
MiniMax M2.1

MiniMax's open-source mixture-of-experts (MoE) model, with outstanding long-context capability up to 200K tokens. Ranks #8 on global usage leaderboards with 23.5B monthly tokens. Apache 2.0 licensed.

Parameters: 45B (MoE)
Minimum RAM: 24 GB
Model size: 18 GB
Quantization: Q4_K_M

Can MiniMax M2.1 run locally?

MiniMax M2.1 is best suited to power-user machines with 32 GB of RAM or more. LocalClaw recommends Q4_K_M as the default quantization, which requires at least 24 GB of RAM.
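As a rough sanity check before downloading, compare the machine's installed RAM against the 18 GB Q4_K_M file size and the 24 GB floor listed above, leaving headroom for the KV cache and the OS. A minimal sketch in Python, assuming the psutil package is installed; the 18 GB and 24 GB figures come from this page, while the 4 GB headroom is an assumption:

  import psutil  # third-party: pip install psutil

  MODEL_SIZE_GB = 18.0  # Q4_K_M file size listed on this page
  MIN_RAM_GB = 24.0     # minimum RAM recommended on this page
  HEADROOM_GB = 4.0     # assumed allowance for KV cache, buffers, and the OS

  def can_run_locally() -> bool:
      """Rough check: enough installed RAM for the weights plus headroom."""
      total_gb = psutil.virtual_memory().total / 1024**3
      return total_gb >= MIN_RAM_GB and total_gb - MODEL_SIZE_GB >= HEADROOM_GB

  if can_run_locally():
      print("Machine meets the Q4_K_M requirements listed on this page.")
  else:
      print("Below the recommended 24 GB RAM floor for Q4_K_M.")

Actual memory use grows with context length, so long sessions near the 200K-token limit need considerably more headroom than this check assumes.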

Search term for LM Studio or compatible runtimes: minimax-m2.1

Hugging Face repository: MiniMaxAI/MiniMax-M2.1-GGUF
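The repository name above works directly with the Hugging Face downloader, and the resulting GGUF file can be loaded with llama-cpp-python. A minimal sketch; the repo id comes from this page, but the exact GGUF filename is an assumption, so check the repository's file list before running:

  from huggingface_hub import hf_hub_download  # pip install huggingface_hub
  from llama_cpp import Llama                  # pip install llama-cpp-python

  model_path = hf_hub_download(
      repo_id="MiniMaxAI/MiniMax-M2.1-GGUF",  # repo id from this page
      filename="minimax-m2.1-Q4_K_M.gguf",    # hypothetical filename
  )

  llm = Llama(
      model_path=model_path,
      n_ctx=8192,       # start well below the 200K maximum to save RAM
      n_gpu_layers=-1,  # offload all layers if a supported GPU is present
  )

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Explain MoE models in one sentence."}]
  )
  print(out["choices"][0]["message"]["content"])

Raising n_ctx toward the 200K limit sharply increases KV-cache memory use, so scale it to the task rather than defaulting to the maximum.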


Strengths

  • Outstanding long-context capability, up to 200K tokens.
  • Open weights under the permissive Apache 2.0 license.
  • Widely used: ranks #8 on global usage leaderboards with 23.5B monthly tokens.
  • Strong quality (9/10) and reasoning (9/10) benchmark scores.

Limitations

  • Performance depends heavily on quantization, RAM bandwidth and runtime support.

Best use cases

  • chat
  • code
  • power-user workloads
  • quality-focused tasks
  • general-purpose use

Benchmarks

Speed: 5/10

Quality: 9/10

Coding: 8/10

Reasoning: 9/10

Technical details

Developer: MiniMax

License: Apache 2.0

Context window: 200K tokens

Architecture: Mixture-of-Experts (MoE)

Released: 2025-09