MiroThinker 1.7 Mini (30B MoE)
⚠️ Despite the "Mini" name, this is a full 30B MoE model (Qwen3-30B-A3B); the "3B" refers to active parameters per forward pass, not total model size. Full weights are ~82 GB and require an H100 80GB or a multi-GPU setup. 256K context, multilingual (EN/ZH and more), deep-research agent with tool calling. Released 11 Mar 2026 under Apache 2.0.
Can MiroThinker 1.7 Mini (30B MoE) run locally?
MiroThinker 1.7 Mini (30B MoE) is best suited to high-end workstations. LocalClaw recommends Q4_K_M as the default quantization, which needs at least 48 GB of RAM; 64 GB gives comfortable headroom. A rough sizing sketch follows.
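As a sanity check, the Q4_K_M weight size can be estimated from the parameter count. The ~4.85 bits/parameter figure below is a common rule of thumb for Q4_K_M, not an official number, and KV-cache memory for long contexts comes on top of it.

```python
# Back-of-envelope sizing for a 30B-parameter model at Q4_K_M.
# Assumption: ~4.85 bits/parameter, a common rule of thumb for Q4_K_M.
total_params = 30e9
bits_per_param = 4.85
weights_gb = total_params * bits_per_param / 8 / 1e9
print(f"Q4_K_M weights: ~{weights_gb:.0f} GB")  # -> ~18 GB

# The 48 GB RAM floor sits well above the weight size because the 256K
# context window needs substantial KV-cache memory on top of the weights.
```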
Search term for LM Studio or compatible runtimes: mirothinker-1.7-mini
Hugging Face repository: miromind-ai/MiroThinker-1.7-mini
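A minimal download sketch using the huggingface_hub library. The repo id is taken from above; the file patterns are an assumption about the repository layout, so check the repo for the available formats (including any GGUF quants) before relying on them.

```python
from huggingface_hub import snapshot_download

# Pull the full-precision weights (~82 GB) from the repository listed above.
# allow_patterns is an assumption; adjust it to what the repo actually holds.
local_dir = snapshot_download(
    repo_id="miromind-ai/MiroThinker-1.7-mini",
    allow_patterns=["*.json", "*.safetensors"],
)
print("Downloaded to:", local_dir)
```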
Strengths
- 256K-token context window, suited to long documents and multi-step agent traces
- MoE efficiency: only ~3B of 30B parameters are active per forward pass
- Deep-research agent with tool calling (see the sketch after this list)
- Multilingual (EN/ZH and more)
- Permissive Apache 2.0 license
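A minimal sketch of driving the model's tool calling through an OpenAI-compatible local endpoint (LM Studio serves one at http://localhost:1234/v1 by default). The model id and the web_search tool are illustrative assumptions, not part of the model card.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool, for illustration only
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mirothinker-1.7-mini",  # assumed LM Studio model id
    messages=[{"role": "user", "content": "Summarize recent MoE papers."}],
    tools=tools,
)

# A deep-research agent typically loops: execute each requested tool,
# append the result as a "tool" message, and call the model again.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```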
Limitations
- Performance depends heavily on quantization, RAM bandwidth, and runtime support.
- Large footprint (~82 GB full weights) and modest generation speed (rated 3/10 below).
Best use cases
- Complex, multi-step reasoning and deep-research workflows
- Code generation and review
- Power users willing to trade speed for output quality
- Quality-critical tasks where accuracy matters more than latency
Benchmarks
Speed: 3/10
Quality: 9/10
Coding: 8/10
Reasoning: 10/10
Technical details
Developer: MiroMind AI (miromind-ai)
License: Apache 2.0
Context window: 256K tokens
Architecture: Qwen3-30B-A3B (30B MoE, ~3B active parameters per forward pass)
Released: 2026-03-11