Local LLM model page
Magistral (24B)
Mistral's efficient reasoning model, with strong chain-of-thought performance at only 24B parameters. 477K downloads.
Parameters: 24B
Minimum RAM: 20 GB
Model size: 14 GB
Quantization: Q4_K_M
Can Magistral (24B) run locally?
Magistral (24B) is best suited to power-user machines with 32 GB of RAM. LocalClaw recommends Q4_K_M as the default quantization, which needs at least 20 GB of RAM.
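The RAM figures above can be sanity-checked with a back-of-the-envelope estimate. This is a sketch, not a published spec: the ~4.85 bits-per-weight average for Q4_K_M is an assumption based on its mixed 4/6-bit block layout.

```python
# Rough memory estimate for a 24B-parameter model at Q4_K_M quantization.
# Assumption: Q4_K_M averages ~4.85 bits per weight (mixed 4/6-bit blocks).
PARAMS = 24e9
BITS_PER_WEIGHT = 4.85

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"quantized weights: ~{weights_gb:.1f} GB")  # roughly 14-15 GB

# Runtime overhead (KV cache, activations, and runtime buffers) adds
# several GB on top of the weights, which is why 20 GB is the practical
# minimum even though the file itself is about 14 GB.
```

The result lines up with the 14 GB model size listed above, and shows why the minimum RAM requirement sits comfortably above the file size.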
Search term for LM Studio or compatible runtimes: magistral-24b
Hugging Face repository: mistralai/Magistral-24B-GGUF
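Once the model is loaded in LM Studio or a compatible runtime, it can be queried over the runtime's OpenAI-compatible HTTP endpoint. A minimal sketch, assuming LM Studio's default local server address (http://localhost:1234/v1) and that the loaded model is registered as magistral-24b; adjust both for your setup:

```python
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "magistral-24b") -> dict:
    """Build an OpenAI-style chat-completion payload for a local runtime."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt: str, base_url: str = "http://localhost:1234/v1") -> str:
    """POST the payload to a local OpenAI-compatible server, return the reply."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires the model to be loaded and the local server running.
    print(ask("A train travels at 60 km/h. How long does 150 km take?"))
```

Any client that speaks the OpenAI chat-completions format works the same way; only the base URL and model name change between runtimes.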
Strengths
- Efficient reasoning model
- Apache 2.0
- Strong chain-of-thought
- 477K downloads
Limitations
- Needs 20 GB+ RAM
- Reasoning overhead
- New, with less community support
Best use cases
- Complex reasoning
- Math
- Strategic planning
- Analysis
Benchmarks
Speed: 5/10
Quality: 8/10
Coding: 7/10
Reasoning: 9/10
Technical details
Developer: Mistral AI
License: Apache 2.0
Context window: 131,072 tokens
Architecture: Transformer with reasoning-enhanced training
Released: 2025-07