Local LLM model page
Devstral (24B)
Best open model for coding agents, designed for agentic software-development workflows. 391K downloads.
Parameters
24B
Minimum RAM
20 GB
Model size
14 GB
Quantization
Q4_K_M
Can Devstral (24B) run locally?
Devstral (24B) is best suited for power-user machines with 32 GB of RAM. LocalClaw recommends Q4_K_M as the default quantization, which requires at least 20 GB of RAM.
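The figures above can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes roughly 4.85 bits per weight for Q4_K_M and a ~1.4x runtime overhead for KV cache and runtime buffers; both are rough rules of thumb, not exact values.

```python
# Rough sanity check of this page's numbers: on-disk size and minimum RAM
# for a 24B-parameter model quantized to Q4_K_M.
# ASSUMPTIONS: ~4.85 bits/weight for Q4_K_M, ~1.4x runtime overhead.

def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in GB for a quantized model."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def min_ram_gb(file_gb: float, overhead: float = 1.4) -> float:
    """Approximate RAM needed: weights plus KV cache and runtime buffers."""
    return file_gb * overhead

size = gguf_size_gb(24, 4.85)   # ~14.5 GB, consistent with the listed 14 GB
ram = min_ram_gb(size)          # ~20 GB, consistent with the listed minimum
print(f"{size:.1f} GB file, ~{ram:.1f} GB RAM")
```

The same arithmetic explains why a 32 GB machine is the comfortable target: it leaves headroom for the OS and other applications on top of the ~20 GB the model needs.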
Search term for LM Studio or compatible runtimes: devstral-24b
Hugging Face repository: mistralai/Devstral-Small-2507-GGUF
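If the model doesn't appear in your runtime's built-in search, the GGUF can be fetched directly from the Hugging Face repository above with the `huggingface-cli` tool (from the `huggingface_hub` package). The `*Q4_K_M*` filename pattern is an assumption; list the repository files first if it doesn't match.

```shell
# Sketch: download only the Q4_K_M quantization (~14 GB) from the repo
# listed above. The filename pattern is an assumption -- verify against
# the repo's file list before running.
huggingface-cli download mistralai/Devstral-Small-2507-GGUF \
  --include "*Q4_K_M*.gguf" \
  --local-dir ./models
```

Point your runtime (LM Studio, llama.cpp, etc.) at the downloaded `.gguf` file in `./models`.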
Strengths
- Best open model for coding agents
- Apache 2.0
- 128K context
- Designed for agentic workflows
Limitations
- Needs 20 GB+ RAM
- Coding specialist; limited general-chat ability
Best use cases
- Agentic coding
- Automated software development
- Code review
- Complex refactoring
Benchmarks
Speed: 5/10
Quality: 8/10
Coding: 10/10
Reasoning: 8/10
Technical details
Developer: Mistral AI
License: Apache 2.0
Context window: 131,072 tokens
Architecture: Transformer optimized for agentic coding
Released: 2025-07
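For a rough sense of scale, the 131,072-token window works out as follows; the ~0.75 words-per-token ratio is a rough assumption for English prose, and code typically tokenizes less densely.

```python
# Rough capacity of the 131,072-token (128K) context window.
# ASSUMPTION: ~0.75 words per token, a rough average for English prose.

CONTEXT_TOKENS = 128 * 1024   # 131,072, as listed under Technical details
WORDS_PER_TOKEN = 0.75        # assumed average; varies by tokenizer/content

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(CONTEXT_TOKENS, approx_words)  # → 131072 98304
```

In practice that is enough to hold a sizeable codebase slice plus conversation history in a single agentic session.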