Local LLM model page
MiroThinker 1.7 (30B MoE)
MiroMind AI's second-generation deep-research agent: a 30B MoE (3B active) with stronger tool-use, a 256K context window, and SOTA results on BrowseComp-ZH (Chinese-language web research). Designed for agentic workflows, not casual chat. Released March 2026 under Apache 2.0.
Parameters: 30B total (3B active, MoE)
Minimum RAM: 48 GB
Model size: 18 GB (Q4_K_M)
Quantization: Q4_K_M
Can MiroThinker 1.7 (30B MoE) run locally?
MiroThinker 1.7 (30B MoE) is best suited to high-end workstations. LocalClaw recommends Q4_K_M as the default quantization, which needs at least 48 GB of RAM; 64 GB leaves comfortable headroom for longer contexts.
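The 18 GB figure follows from simple arithmetic: Q4_K_M averages roughly 4.85 bits per weight (an approximate figure, not from the model card), so 30B parameters land near 18 GB on disk. A minimal sketch of that estimate:

```python
# Back-of-envelope weight-memory estimate for a quantized model.
# The 4.85 bits/weight average for Q4_K_M is an approximation.
params = 30e9           # total parameters (MoE total, not just active)
bits_per_weight = 4.85  # approximate average for Q4_K_M

weight_gb = params * bits_per_weight / 8 / 1e9
print(f"Approx. weight file size: {weight_gb:.1f} GB")  # ~18.2 GB

# Add headroom for the KV cache (grows with context length) and runtime
# overhead, which is why 48 GB of system RAM is the practical floor.
```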
Search term for LM Studio or compatible runtimes: mirothinker-1.7
Hugging Face repository: miromind-ai/MiroThinker-1.7
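For scripted use outside LM Studio, here is a minimal loading sketch with llama-cpp-python, assuming a GGUF export of the model is available; the filename below is hypothetical, so check the repository for the actual Q4_K_M file:

```python
# Minimal local-inference sketch with llama-cpp-python.
# NOTE: the GGUF filename is hypothetical; check the Hugging Face repo
# (miromind-ai/MiroThinker-1.7) for the real Q4_K_M artifact name.
from llama_cpp import Llama

llm = Llama(
    model_path="MiroThinker-1.7-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,      # start well below the 256K maximum to limit KV-cache RAM
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the trade-offs of MoE models."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Starting with a reduced n_ctx keeps KV-cache memory in check; pushing toward the full 256K window costs substantially more RAM.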
Strengths
- Stronger tool-use than the first-generation MiroThinker, built for deep-research agent workflows
- 256K context window for long research sessions
- SOTA on BrowseComp-ZH (Chinese-language web research)
- MoE efficiency: only 3B of 30B parameters active per token
- Permissive Apache 2.0 license
Limitations
- Performance depends heavily on quantization, RAM bandwidth, and runtime support.
- Tuned for agentic deep-research workflows; not intended for casual chat.
Best use cases
- Deep research and multi-step reasoning (see the tool-call sketch after this list)
- Coding
- Power users with capable hardware
- Quality-over-speed workloads
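Since the model targets agentic tool-use, here is a hedged sketch of one tool-call round trip against a local OpenAI-compatible server (LM Studio serves one at http://localhost:1234/v1 by default); the model id and the web_search tool schema are illustrative assumptions, not part of the model's distribution:

```python
# One tool-call round trip against a local OpenAI-compatible server
# (e.g. LM Studio's built-in server at http://localhost:1234/v1).
# The model id and the web_search tool are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool your agent loop would implement
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mirothinker-1.7",  # match the id your runtime actually reports
    messages=[{"role": "user", "content": "Find recent results on BrowseComp-ZH."}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call a tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(msg.content)
```

A real agent loop would execute the tool, append the result as a tool message, and call the model again until it produces a final answer.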
Benchmarks
Speed: 3/10
Quality: 9/10
Coding: 8/10
Reasoning: 10/10
Technical details
Developer: MiroMind AI
License: Apache 2.0
Context window: 256K tokens
Architecture: Mixture-of-Experts (30B total, 3B active)
Released: 2026-03