Local LLM model page
DeepCoder (14B)
An open coding model at roughly o3-mini level, pairing strong reasoning with strong code generation. 326K downloads.
Parameters
14B
Minimum RAM
12 GB
Model size
8.5 GB
Quantization
Q4_K_M
Can DeepCoder (14B) run locally?
DeepCoder (14B) is best suited for mainstream Macs and PCs with 16 GB RAM. LocalClaw recommends Q4_K_M as the default quantization, with at least 12 GB RAM: the 8.5 GB of weights plus the KV cache and runtime overhead fit within that budget at moderate context lengths (a rough estimator is sketched below).
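As a sanity check on the 12 GB figure, here is a minimal back-of-the-envelope estimator. The bits-per-weight value and the KV-cache dimensions are assumptions typical of a 14B GQA model, not LocalClaw's published sizing method:

```python
# Back-of-the-envelope RAM estimate for a quantized GGUF model.
# All numbers are assumptions typical of a 14B GQA model (e.g. 48 layers,
# 8 KV heads, 128-dim heads), not LocalClaw's published sizing method.

def estimate_ram_gb(params_b: float, bits_per_weight: float = 4.85,
                    context_tokens: int = 8192, layers: int = 48,
                    kv_heads: int = 8, head_dim: int = 128) -> float:
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # fp16 KV cache: 2 tensors (K, V) x layers x kv_heads x head_dim x 2 bytes
    kv_cache_gb = context_tokens * 2 * layers * kv_heads * head_dim * 2 / 1e9
    overhead_gb = 1.0  # scratch buffers and runtime overhead (assumption)
    return weights_gb + kv_cache_gb + overhead_gb

print(f"{estimate_ram_gb(14):.1f} GB")  # ~12.7 GB at an 8K context
```

At shorter contexts the KV cache shrinks proportionally, which is why the 12 GB minimum is workable even though an 8K-context session lands slightly above it.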
Search term for LM Studio or compatible runtimes: deepcoder-14b-preview
Hugging Face repository: agentica-org/DeepCoder-14B-Preview (GGUF builds are available from community quantizers)
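To fetch the weights outside of LM Studio's UI, a minimal sketch using the huggingface_hub client follows. The repository and GGUF filename below are illustrative; browse the repository (or a community GGUF mirror of it) to confirm the exact Q4_K_M filename before downloading:

```python
# Minimal sketch: download a GGUF quantization with huggingface_hub.
# Repo and filename are illustrative assumptions; check the repository's
# file list for the actual Q4_K_M build.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="agentica-org/DeepCoder-14B-Preview",   # base repository
    filename="DeepCoder-14B-Preview-Q4_K_M.gguf",   # hypothetical filename
)
print(f"Model saved to: {path}")
```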
Tags: code, reasoning, power
Strengths
- Coding and reasoning performance at roughly o3-mini level among open models
- Strong combination of step-by-step reasoning and code generation
- Widely adopted: 326K downloads
Limitations
- Performance depends heavily on quantization level, RAM bandwidth, and runtime support.
Best use cases
- Code generation, debugging, and review (see the example after this list)
- Multi-step reasoning over technical problems
- Power users who want a capable, fully local coding assistant
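For the coding use case, here is a minimal sketch of querying the model through a local OpenAI-compatible endpoint such as the one LM Studio serves. The base URL (LM Studio's default) and the model name are assumptions; match them to your runtime's settings:

```python
# Minimal sketch: a coding request against a local OpenAI-compatible server.
# base_url assumes LM Studio's default port; the model name must match
# whatever identifier your runtime assigns to the loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="deepcoder-14b-preview",
    messages=[{"role": "user",
               "content": "Write a Python function that checks whether "
                          "a string is a palindrome."}],
    temperature=0.2,  # low temperature tends to help on code tasks
)
print(resp.choices[0].message.content)
```

Any runtime that speaks the OpenAI chat-completions protocol (llama.cpp's server, Ollama's compatibility endpoint) works the same way; only base_url and model change.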
Benchmarks
Speed: 6/10
Quality: 8/10
Coding: 9/10
Reasoning: 8/10
Technical details
Developer: Agentica (agentica-org)
License: See model repository
Context window: Not listed (see model card)
Architecture: Fine-tuned from DeepSeek-R1-Distill-Qwen-14B (see model card)
Released: 2025-03