Qwen 3.5 MoE (122B/10B active)
Large MoE model with only 10B active params. 60% cheaper to run than Qwen3-Max. 256K context. Top-tier reasoning, coding and multilingual performance. Hybrid think/non-think. Apache 2.0.
Best local multilingual AI models for translation, bilingual chat, international support and non-English reasoning. Ranked using the LocalClaw model database, with RAM requirements, quantization and links to static model pages.
For multilingual work, start with Qwen 3.5 MoE (122B/10B active) if it fits your hardware. If not, choose the highest-ranked model that fits your RAM tier and preferred quantization.
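A rough way to judge whether a model fits: quantized weights take approximately total parameters × bits per weight / 8 bytes, and for MoE models memory scales with total parameters (all experts stay resident) while speed scales with active parameters. The sketch below is illustrative only; the model names, quantization figures and 20% overhead factor are assumptions, not LocalClaw's published requirements.

```python
def estimate_ram_gb(total_params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough RAM estimate in GB: quantized weights plus ~20% overhead for the
    KV cache and runtime buffers. Real usage varies by backend and context length."""
    weight_gb = total_params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb * overhead


def pick_model(ranked_models: list[dict], ram_budget_gb: float):
    """Walk a list already sorted by rank and return the first model that fits the budget."""
    for m in ranked_models:
        if estimate_ram_gb(m["total_params_b"], m["bits_per_weight"]) <= ram_budget_gb:
            return m["name"]
    return None


# Hypothetical entries: names and quantization levels are assumptions for the example.
ranked = [
    {"name": "qwen-3.5-moe-122b", "total_params_b": 122, "bits_per_weight": 4.5},
    {"name": "mid-range-fallback-30b", "total_params_b": 30, "bits_per_weight": 4.5},
]
print(pick_model(ranked, ram_budget_gb=24))  # -> mid-range-fallback-30b
```

If nothing fits, drop to a more aggressive quantization or a smaller tier; active-parameter counts (e.g. 10B of 122B) tell you about speed, not about how much RAM you need.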
Alibaba's next-gen MoE with hybrid-gated DeltaNet attention. Only 3B active params — runs at dense 7B speed with 70B quality. 256K native context (extensible to 1M). Hybrid thinking mode. Apache 2.0.
Moonshot AI's trillion-parameter MoE flagship. 32B active params per token with 384 experts. Matches or beats GPT-4 Turbo on MMLU, GSM8K, HumanEval. Agentic & tool-use specialist. Server-grade only. Modified MIT.
Flagship open-source Qwen 3.5. Only 17B active params despite 397B total — world-class quality at MoE efficiency. Matches GPT-4o on major benchmarks. Requires multi-GPU or server-grade hardware. Apache 2.0.
Meta Llama 4 Maverick — 128-expert MoE flagship. Matches or beats GPT-4o and Gemini 2.0 Flash on reasoning, coding and multimodal benchmarks. 1M-token context. Server-grade hardware only. Llama 4 Community License.
DeepSeek frontier MoE with 1M-token context, hybrid compressed attention and top-tier coding/reasoning. MIT licensed. Datacenter-grade only.
Z.ai's next-generation flagship for agentic engineering. Stronger coding and long-horizon tool use, with gains on SWE-Bench Pro, Terminal-Bench and repo generation. MIT licensed.
Experimental V3.2 with DeepSeek Sparse Attention (DSA) — halves inference cost vs V3.1 on long context while keeping quality. 128K context, improved coding & tool-use. MIT licensed. Server-grade.
Zhipu AI flagship — full GLM 4.6. 200K context, strong tool-calling & agentic workflows. Competes with Claude 3.5 Sonnet on reasoning and code. MIT licensed. Server-grade hardware.
InclusionAI's MIT-licensed instruct MoE optimized for fast agent workloads. 104B total parameters, only 7.4B active, hybrid linear attention, 262K context and strong tool-use / multi-step execution with high token efficiency.
Qwen 3.6 flagship dense model. Hybrid thinking mode with /think toggle for deep chain-of-thought reasoning. 128K context, 29+ languages. Significantly outperforms Qwen3.5-27B on reasoning, coding & math. Apache 2.0.
Gemma 4 MoE flagship-for-workstations: 26B total with ~4B active parameters. 256K context and excellent quality-per-watt for local inference. Apache 2.0.
Zhipu AI's efficient MoE powerhouse. 106B total parameters, only 14B active at inference, giving dense-model speed with the quality of a much larger model. Clearly the best in the 16–24GB RAM range. Outperforms Llama 3.3 70B. Apache 2.0.
Cohere open-weight flagship optimised for agentic workflows and long-context RAG. 256K context, excellent multilingual coverage (23 languages). CC-BY-NC 4.0 — non-commercial.
Xiaomi MiMo flagship MoE for demanding agentic, software engineering and long-horizon tasks. 1M-token context, FP8, strong instruction following. MIT licensed.
Near-GPT-4 intelligence running locally. Thinking mode excels on hard problems. The local AI dream.
Moonshot AI's agentic flagship. 1T total MoE parameters with 32B active per forward pass. Unmatched long-context reasoning at 256K tokens. Designed for complex agentic tasks and tool use. Model License — check moonshotai.com for commercial terms.
MiniMax MoE flagship with 10B active params and 4M-token long-context. Specialised for agentic coding and tool-use. Competitive with GPT-4 class models at a fraction of the inference cost. MIT licensed.
LocalClaw ranks models using their tags plus relative benchmark scores for speed, quality, coding and reasoning. The goal is a practical local setup recommendation, not a synthetic leaderboard.
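As a hedged illustration of what "tags plus relative benchmark scores" could look like in practice, the sketch below combines a weighted sum over the four score dimensions with small per-tag bonuses. The weights, tag bonuses and model entries are invented for the example; they are not LocalClaw's actual formula or data.

```python
from dataclasses import dataclass, field


@dataclass
class ModelScores:
    name: str
    speed: float        # relative benchmark scores, normalised to 0..1
    quality: float
    coding: float
    reasoning: float
    tags: set = field(default_factory=set)


# Assumed weights and tag bonuses, chosen only to make the example concrete.
WEIGHTS = {"speed": 0.20, "quality": 0.35, "coding": 0.20, "reasoning": 0.25}
TAG_BONUS = {"multilingual": 0.10, "agentic": 0.05}


def rank_score(m: ModelScores) -> float:
    """Weighted sum of the four score dimensions plus a small bonus per matching tag."""
    base = (WEIGHTS["speed"] * m.speed + WEIGHTS["quality"] * m.quality
            + WEIGHTS["coding"] * m.coding + WEIGHTS["reasoning"] * m.reasoning)
    return base + sum(TAG_BONUS.get(t, 0.0) for t in m.tags)


models = [
    ModelScores("model-a", 0.9, 0.7, 0.6, 0.7, {"multilingual"}),
    ModelScores("model-b", 0.5, 0.9, 0.8, 0.9),
]
for m in sorted(models, key=rank_score, reverse=True):
    print(f"{m.name}: {rank_score(m):.3f}")
```

In practice the weights would differ per use case; a multilingual page like this one would weight the multilingual tag and quality more heavily than raw speed.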