DeepSeek V3.2 (37B/671B MoE)
DeepSeek's flagship Mixture-of-Experts model: 37B parameters active per token out of 671B total. Exceptional coding, reasoning, and general capabilities. Ranks #6 on global usage leaderboards with 29B monthly tokens. MIT licensed.
Parameters
37B active (671B total, MoE)
Minimum RAM
48 GB
Model size
40 GB
Quantization
Q4_K_M
Can DeepSeek V3.2 (37B/671B MoE) run locally?
DeepSeek V3.2 (37B/671B MoE) is best suited to high-end workstations with 64 GB of RAM or more. LocalClaw recommends Q4_K_M as the default quantization, with 48 GB of RAM as the practical minimum.
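Before downloading roughly 40 GB of weights, it is worth confirming the machine actually clears that RAM floor. The following is a minimal sketch, assuming the psutil package is installed (pip install psutil); the 48 GB threshold is just the recommendation above, not a hard limit enforced by any runtime.

```python
# Minimal sketch: compare installed RAM against the 48 GB floor above.
# psutil is an assumed dependency (pip install psutil); the threshold is
# this page's recommendation, not a requirement enforced by any runtime.
import psutil

MIN_RAM_GB = 48  # recommended minimum from this page

total_gb = psutil.virtual_memory().total / (1024 ** 3)
if total_gb >= MIN_RAM_GB:
    print(f"{total_gb:.1f} GB RAM detected: meets the {MIN_RAM_GB} GB minimum.")
else:
    print(f"{total_gb:.1f} GB RAM detected: below the {MIN_RAM_GB} GB minimum; "
          "expect heavy swapping or load failures at Q4_K_M.")
```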
Search term for LM Studio or compatible runtimes: deepseek-v3.2
Hugging Face repository: deepseek-ai/DeepSeek-V3.2-GGUF
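If the runtime cannot fetch models itself, the GGUF weights can be pulled straight from the repository listed above. A minimal sketch, assuming the huggingface_hub package is installed (pip install huggingface_hub); the "*Q4_K_M*" filename glob is an assumption about the repo's naming convention, so check the file list and adjust if the shards are named differently.

```python
# Sketch: download only the Q4_K_M GGUF files from the repo listed above.
# The "*Q4_K_M*" glob is a guess at the filename convention -- verify it
# against the repository's file list before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.2-GGUF",  # repository named on this page
    allow_patterns=["*Q4_K_M*"],               # skip all other quantizations
)
print(f"GGUF files downloaded to: {local_dir}")
```

Filtering with allow_patterns keeps the download to the one quantization you plan to run instead of every variant in the repository.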
Tags: chat, code, reasoning, power, quality, general
Strengths
- Latest DeepSeek MoE flagship
- Ranks #6 globally with 29B monthly tokens
- MIT license
- Exceptional coding and reasoning
Limitations
- Requires 48 GB+ RAM
- Server-grade hardware recommended
- Complex setup
Best use cases
- Enterprise AI
- Research
- Complex coding
- Reasoning at scale
Benchmarks
Speed: 3/10
Quality: 10/10
Coding: 10/10
Reasoning: 10/10
Technical details
Developer: DeepSeek AI
License: MIT
Context window: 131,072 tokens
Architecture: Mixture of Experts (MoE), 671B total parameters, ~37B active per token
Released: 2025-12
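Most runtimes default to a much smaller context than the 131,072 tokens listed above, so the full window usually has to be requested explicitly at load time. A minimal sketch using llama-cpp-python, with a hypothetical local path to the downloaded Q4_K_M file; note that the KV cache for a full-length context adds significant memory on top of the weights themselves.

```python
# Sketch: load the GGUF with llama-cpp-python and request the full context.
# The model path is hypothetical -- point it at your downloaded Q4_K_M file.
# n_ctx=131072 matches the context window above but inflates KV-cache memory;
# start smaller (e.g. 16384) if RAM is tight.
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-V3.2-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=131072,      # full context window from the technical details
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Explain mixture-of-experts routing in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```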