Local LLM model page

Llama 4 Maverick (17B/400B MoE)

Meta Llama 4 Maverick — a 128-expert MoE flagship. Meta reports it matches or beats GPT-4o and Gemini 2.0 Flash on reasoning, coding, and multimodal benchmarks. 1M-token context window. Server-grade hardware only. Released under the Llama 4 Community License.

Parameters
400B (17B active, 128 experts)
Minimum RAM
384 GB
Model size
240 GB
Quantization
Q4_K_M
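
As a sanity check, the 240 GB file size follows almost directly from the parameter count and the quantization width. The ~4.8 bits/weight average for Q4_K_M below is an approximation, not a spec:

```python
# Back-of-the-envelope size estimate for the quantized checkpoint.
TOTAL_PARAMS = 400e9      # all experts, per the table above
BITS_PER_WEIGHT = 4.8     # rough average for Q4_K_M GGUF (assumption)

size_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"Estimated file size: {size_gb:.0f} GB")  # ~240 GB, matching the table
```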

Can Llama 4 Maverick (17B/400B MoE) run locally?

Llama 4 Maverick (17B/400B MoE) is best suited for server-grade or multi-GPU systems. LocalClaw recommends Q4_K_M as the default quantization, with at least 384 GB RAM.
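
A minimal pre-flight sketch for checking the RAM requirement, assuming the third-party psutil package is installed (`pip install psutil`):

```python
# Does this machine have enough RAM? The 384 GB threshold comes from the
# table above; it exceeds the 240 GB file size to leave headroom for the
# KV cache and the OS.
import psutil

REQUIRED_GB = 384
total_gb = psutil.virtual_memory().total / 1e9
if total_gb >= REQUIRED_GB:
    print(f"{total_gb:.0f} GB RAM detected - meets the minimum")
else:
    print(f"{total_gb:.0f} GB RAM detected - below the {REQUIRED_GB} GB minimum")
```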

Search term for LM Studio or compatible runtimes: llama-4-maverick-17b-128e-instruct
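
Once the model is loaded, LM Studio exposes an OpenAI-compatible server on localhost:1234 by default. A minimal sketch using the openai Python client; the exact model identifier may differ from the search term depending on the build you download, and the API key can be any placeholder string for a local server:

```python
# Query the model through LM Studio's OpenAI-compatible local server.
# Assumes LM Studio is running with the model loaded and its server
# enabled on the default port (1234).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="llama-4-maverick-17b-128e-instruct",  # assumed identifier; check LM Studio
    messages=[{"role": "user", "content": "Summarize MoE routing in two sentences."}],
)
print(resp.choices[0].message.content)
```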

Hugging Face repository: meta-llama/Llama-4-Maverick-17B-128E-Instruct
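
A sketch for fetching the weights with huggingface_hub. Note the repository is gated: you must accept the Llama 4 Community License on the model page and authenticate (`huggingface-cli login`) before the download will succeed:

```python
# Download the full checkpoint from Hugging Face (requires license
# acceptance and an authenticated session; expect hundreds of GB).
from huggingface_hub import snapshot_download

path = snapshot_download("meta-llama/Llama-4-Maverick-17B-128E-Instruct")
print(f"Weights downloaded to: {path}")
```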


Strengths

  • Sparse MoE design: only 17B of the 400B parameters are active per token, keeping per-token compute far below a dense model of similar quality.
  • Meta reports reasoning, coding, and multimodal benchmark results that match or beat GPT-4o and Gemini 2.0 Flash.
  • 1M-token context window for long-document and multi-file workloads.
  • Natively multimodal: accepts text and image input.

Limitations

  • Performance depends heavily on quantization, memory bandwidth, and runtime support; a rough bandwidth ceiling is sketched below.
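
To see why memory bandwidth dominates decode speed, a back-of-the-envelope estimate; the bandwidth and bits-per-weight figures below are illustrative assumptions, not measurements:

```python
# Rough decode-speed ceiling for a bandwidth-bound MoE model: each token
# touches only the ~17B active parameters, not all 400B.
ACTIVE_PARAMS = 17e9
BITS_PER_WEIGHT = 4.8        # approx. Q4_K_M average (assumption)
MEM_BANDWIDTH_GBS = 400      # e.g. a high-end server CPU platform (assumption)

bytes_per_token = ACTIVE_PARAMS * BITS_PER_WEIGHT / 8
print(f"~{MEM_BANDWIDTH_GBS / (bytes_per_token / 1e9):.0f} tokens/s upper bound")
# ~39 tokens/s; real throughput will be lower due to KV cache reads,
# routing overhead, and runtime efficiency.
```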

Best use cases

  • chat
  • vision
  • reasoning
  • multimodal
  • quality

Benchmarks

Speed: 2/10

Quality: 10/10

Coding: 10/10

Reasoning: 10/10

Technical details

Developer: Meta

License: Llama 4 Community License

Context window: 1M tokens

Architecture: Mixture-of-Experts (128 experts, 17B active / 400B total)

Released: 2025-04