Phi-3 (3.8B)
Microsoft's lightweight powerhouse. Punches well above its weight. 11.3M downloads. Great for edge devices.
Parameters
3.8B
Minimum RAM
6 GB
Model size
2.3 GB
Quantization
Q5_K_M
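As a rough sanity check on these numbers, weight size scales with parameter count times bits per weight. A minimal sketch, assuming Q5_K_M averages roughly 5.5 bits per weight (an approximation, not an exact spec value):

```python
# Back-of-envelope GGUF sizing. The 5.5 bits/weight average for Q5_K_M
# is an assumption, not an exact figure.
params = 3.8e9          # parameter count
bits_per_weight = 5.5   # rough average for Q5_K_M quantization
file_gb = params * bits_per_weight / 8 / 1e9
# Runtime RAM also holds the KV cache plus runtime overhead, so plan
# for headroom beyond the file size itself.
print(f"~{file_gb:.1f} GB of weights")  # ~2.6 GB, in line with the listed 2.3 GB
```

The gap between this estimate and the 6 GB RAM minimum is the headroom for the KV cache, the runtime, and the OS.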
Can Phi-3 (3.8B) run locally?
Phi-3 (3.8B) is best suited for entry-level laptops and desktops. LocalClaw recommends Q5_K_M as the default quantization, with at least 6 GB RAM.
Search term for LM Studio or compatible runtimes: phi-3-mini-4k-instruct
Hugging Face repository: lmstudio-community/Phi-3-mini-4k-instruct-GGUF
Tags: chat · reasoning · lightspeed
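To try it straight from that repository, here is a minimal sketch using llama-cpp-python. The Q5_K_M filename glob is an assumption, so check the repo's file list:

```python
# Sketch: download the Q5_K_M GGUF from the repo above and run one prompt.
# Requires: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lmstudio-community/Phi-3-mini-4k-instruct-GGUF",
    filename="*Q5_K_M.gguf",  # glob pattern; assumes the repo ships a Q5_K_M file
    n_ctx=4096,               # matches the model's 4K context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what quantization does."}]
)
print(out["choices"][0]["message"]["content"])
```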
Strengths
- 11.3M downloads
- MIT license
- Great for edge devices
- Pioneered the small-model, big-performance approach
Limitations
- Only 4K context
- Primarily English
- Superseded by Phi-4
Best use cases
- Quick Q&A (see the sketch after this list)
- Edge deployment
- Education
- Lightweight tasks
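For the quick-Q&A case, the model can also be served from LM Studio's built-in local server, which speaks the OpenAI chat-completions format on port 1234 by default. A minimal sketch; the model name string is an assumption, so use the identifier your server actually lists:

```python
# Quick Q&A against LM Studio's local OpenAI-compatible server.
# Assumes the model above is already loaded in LM Studio; the "model"
# value below is an assumption, not a guaranteed identifier.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "phi-3-mini-4k-instruct",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "temperature": 0.2,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```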
Benchmarks
Speed: 9/10
Quality: 6/10
Coding: 6/10
Reasoning: 6/10
Technical details
Developer: Microsoft Research
License: MIT
Context window: 4,096 tokens
Architecture: Transformer decoder-only
Released: 2024-04