Local LLM model page
SmolLM2 (1.7B)
Ultra-compact model from Hugging Face's SmolLM2 family, surprisingly capable for its small size. 1.9M downloads.
Parameters
1.7B
Minimum RAM
4 GB
Model size
1 GB
Quantization
Q8_0
Can SmolLM2 (1.7B) run locally?
SmolLM2 (1.7B) is best suited to entry-level laptops and desktops. LocalClaw recommends Q8_0 as the default quantization, with at least 4 GB of RAM.
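As a rough sanity check, you can estimate on-disk size and RAM needs from the parameter count and the quantization's bits per weight. This is a sketch: the ~8.5 effective bits per weight for Q8_0 and the ~1.5 GB runtime overhead are assumptions, not figures from this page.

```python
def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB: parameters * bits per weight / 8 bits per byte."""
    return params_billions * bits_per_weight / 8


def ram_needed_gb(model_gb: float, overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate: model weights plus assumed KV-cache/runtime overhead."""
    return model_gb + overhead_gb


# SmolLM2 1.7B at Q8_0 (assuming ~8.5 effective bits per weight incl. metadata)
size = gguf_size_gb(1.7, 8.5)
print(f"~{size:.1f} GB on disk, ~{ram_needed_gb(size):.1f} GB RAM")  # ~1.8 GB on disk, ~3.3 GB RAM
```

The estimate lands just under the 4 GB minimum listed above, which is consistent with Q8_0 being the recommended quantization for this tier of hardware.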
Search term for LM Studio or compatible runtimes: smollm2-1.7b-instruct
Hugging Face repository: HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF
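One way to load the GGUF build from that repository is llama-cpp-python; this is a sketch, not a prescribed workflow, and the `*Q8_0.gguf` filename glob is an assumption you should check against the repository's file list.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the Q8_0 GGUF from the repository above and run one chat turn.
# The filename glob is an assumption; verify it against the repo's files.
llm = Llama.from_pretrained(
    repo_id="HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF",
    filename="*Q8_0.gguf",
    n_ctx=2048,  # modest context to stay within a 4 GB RAM budget
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```

LM Studio users can skip the code entirely and search for the term given above.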
Strengths
- Very small footprint: about 1 GB on disk at Q8_0, running within 4 GB of RAM on entry-level hardware.
- Fast inference relative to larger models, responsive even on CPU-only machines.
Limitations
- Performance depends heavily on quantization, RAM bandwidth, and runtime support.
- Quality, coding, and reasoning scores are low, as expected at this parameter count.
Best use cases
- Chat and general conversation
- Lightweight everyday tasks
- Speed-sensitive, low-latency applications
Benchmarks
Speed: 10/10
Quality: 4/10
Coding: 3/10
Reasoning: 3/10
Technical details
Developer: Hugging Face (HuggingFaceTB)
License: See model repository
Context window: Not specified (see model card)
Architecture: See model card
Released: 2024-11