Local LLM model page
DeepSeek Coder V2 (16B)
A Mixture-of-Experts (MoE) code model that rivals GPT-4 Turbo on coding benchmarks. 1.1M downloads.
Parameters
16B
Minimum RAM
12 GB
Model size
9.5 GB
Quantization
Q4_K_M
Can DeepSeek Coder V2 (16B) run locally?
DeepSeek Coder V2 (16B) is best suited to mainstream Macs and PCs with 16 GB of RAM. LocalClaw recommends the Q4_K_M quantization as the default, with at least 12 GB of RAM.
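The 12 GB minimum leaves headroom for the KV cache and runtime overhead on top of the weights themselves. As a rough sanity check on the 9.5 GB file size, GGUF size is approximately total parameters times average bits per weight; a sketch, where the ~15.7B total parameter count and the ~4.85 bits/weight average for Q4_K_M are community estimates, not values from this page:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed values: ~15.7B total params (MoE total, not active),
# ~4.85 bits/weight average for Q4_K_M.
print(round(gguf_size_gb(15.7e9, 4.85), 1))
```

The result lands near the 9.5 GB listed above; actual file sizes vary slightly because K-quants mix bit widths across layers.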
Search term for LM Studio or compatible runtimes: deepseek-coder-v2-lite-instruct
Hugging Face repository: bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF
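Outside LM Studio, one way to run the repository above is llama.cpp. A minimal sketch, assuming `huggingface-cli` and `llama-server` are installed and that the repo follows the usual bartowski file naming (the exact .gguf filename may differ; check the repository's file list):

```shell
REPO=bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF
# Assumed filename based on the common naming convention for this repo
FILE=DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf

huggingface-cli download "$REPO" "$FILE" --local-dir models
llama-server -m "models/$FILE" -c 4096 --port 8080
```

`llama-server` then exposes an OpenAI-compatible API on port 8080; raise `-c` if your RAM allows a longer context.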
Strengths
- Mixture-of-Experts architecture delivers strong coding performance, rivaling GPT-4 Turbo on coding benchmarks
- Widely adopted, with 1.1M downloads
Limitations
- Performance depends heavily on quantization, RAM bandwidth and runtime support.
Best use cases
- Code generation, completion, and review
- Power users who want near-frontier coding quality on local hardware
Benchmarks
Speed: 6/10
Quality: 8/10
Coding: 9/10
Reasoning: 7/10
Technical details
Developer: DeepSeek
License: See model repository
Context window: 128K tokens
Architecture: Mixture-of-Experts (MoE) transformer
Released: 2024-06