DeepSeek V4 Flash (284B MoE)
Efficient DeepSeek V4 variant: 284B total parameters, 13B active, 1M-token context. Flash-Max can approach Pro-level reasoning with a larger thinking budget. MIT licensed.
Parameters
284B (13B active)
Minimum RAM
256 GB
Model size
170 GB
Quantization
FP4/FP8
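The listed model size follows from simple arithmetic; a quick sanity check (the ~4.8 bits/weight average across FP4 and FP8 tensors is an assumption chosen only because it reproduces the listed figure):

```python
# Sanity check of the listed model size for DeepSeek V4 Flash.
# Assumption: mixed FP4/FP8 quantization averaging ~4.8 bits per weight,
# chosen here only because it reproduces the listed 170 GB.
TOTAL_PARAMS = 284e9
BITS_PER_WEIGHT = 4.8  # assumed average across FP4 and FP8 tensors

weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"Estimated weight footprint: ~{weights_gb:.0f} GB")  # ~170 GB

# KV cache, runtime buffers, and OS overhead account for the gap
# between the 170 GB model size and the 256 GB minimum-RAM figure.
```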
Can DeepSeek V4 Flash (284B MoE) run locally?
DeepSeek V4 Flash (284B MoE) is best suited for server-grade or multi-GPU systems. LocalClaw recommends FP4/FP8 as the default quantization, with at least 256 GB RAM.
Search term for LM Studio or compatible runtimes: deepseek-v4-flash
Hugging Face repository: deepseek-ai/DeepSeek-V4-Flash
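For fetching the weights outside LM Studio, a minimal sketch using huggingface_hub (the repo id comes from this page; the target directory is hypothetical, and the repository's availability and file layout are not verified here):

```python
# Download the DeepSeek V4 Flash weights from Hugging Face.
# The repo id is taken from this page; expect a very large download (~170 GB).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V4-Flash",
    local_dir="./deepseek-v4-flash",  # hypothetical target directory
)
print(f"Model files downloaded to {local_path}")
```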
Tags: chat, code, reasoning, power, agentic, long-context, general
Strengths
- Sparse MoE efficiency: 284B total parameters with only 13B active per token
- 1M-token context window
- Flash-Max mode can approach Pro-level reasoning with a larger thinking budget
- MIT licensed
Limitations
- Performance depends heavily on quantization, RAM bandwidth, and runtime support; see the throughput sketch below.
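To see why RAM bandwidth dominates decode speed, a back-of-the-envelope sketch (the 400 GB/s bandwidth figure and the ~4.8 bits/weight average are illustrative assumptions, not measurements): each generated token must stream the ~13B active parameters from memory, so bandwidth divided by bytes-per-token gives a hard ceiling on tokens per second.

```python
# Decode-speed ceiling for a memory-bandwidth-bound MoE model.
# Each generated token streams the ~13B active parameters from RAM.
ACTIVE_PARAMS = 13e9
BYTES_PER_WEIGHT = 0.6       # assumed mixed FP4/FP8 average (~4.8 bits)
MEM_BANDWIDTH_BYTES = 400e9  # hypothetical 400 GB/s server memory bandwidth

bytes_per_token = ACTIVE_PARAMS * BYTES_PER_WEIGHT   # ~7.8 GB per token
ceiling_tps = MEM_BANDWIDTH_BYTES / bytes_per_token
print(f"Theoretical decode ceiling: ~{ceiling_tps:.0f} tokens/s")  # ~51
```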
Best use cases
- chat
- code
- reasoning
- power
- agentic
- long-context
- general
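For the chat, code, and agentic use cases above, a minimal sketch of querying a locally served copy through an OpenAI-compatible endpoint (port 1234 is LM Studio's default local-server port; the model identifier is assumed to match the search term listed above and may differ in practice):

```python
# Minimal chat request against a locally served copy of the model.
# Assumes LM Studio (or a compatible runtime) is serving its
# OpenAI-compatible API on the default port 1234.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # placeholder; local servers typically ignore the key
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",  # search term from this page; actual id may differ
    messages=[
        {"role": "user", "content": "Summarize the MoE architecture in two sentences."}
    ],
)
print(response.choices[0].message.content)
```

Agentic frameworks that speak the OpenAI API can point at the same base URL.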
Benchmarks
Speed: 5/10
Quality: 9/10
Coding: 9/10
Reasoning: 9/10
Technical details
Developer: DeepSeek (deepseek-ai)
License: MIT
Context window: 1M tokens
Architecture: Mixture-of-Experts (see model card for details)
Released: 2026-05