Local LLM model page

Mistral Small 3.2 (24B)

Mistral AI's latest dense 24B model: improved instruction following and function calling, reduced repetition, strong European-language support, a 128K context window, and an Apache 2.0 license.

Parameters: 24B
Minimum RAM: 24 GB
Model size: 14 GB
Quantization: Q5_K_M
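
A rough way to sanity-check the 24 GB floor: quantized weights plus the KV cache plus runtime overhead. The sketch below assumes an fp16 KV cache and Mistral Small 3-family attention dimensions (40 layers, 8 KV heads, head dim 128), which this page does not confirm, so treat the output as an estimate only.

# Back-of-envelope RAM estimate: quantized weights + KV cache + overhead.
# Architecture numbers are assumptions based on the Mistral Small 3 family
# (40 layers, 8 KV heads, head_dim 128); check the model card to confirm.

def kv_cache_gib(n_layers=40, n_kv_heads=8, head_dim=128,
                 ctx_tokens=32_768, bytes_per_elem=2):
    """fp16 KV cache: 2 tensors (K and V) per layer, per token."""
    b = 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_elem
    return b / 2**30

weights_gib = 14.0  # Q5_K_M GGUF size listed above
for ctx in (8_192, 32_768, 131_072):
    total = weights_gib + kv_cache_gib(ctx_tokens=ctx) + 2.0  # ~2 GiB runtime overhead
    print(f"{ctx:>7} tokens of context -> ~{total:.1f} GiB")

Under those assumptions, a full 128K-token fp16 KV cache alone is on the order of 20 GiB, which is why runtimes typically quantize the cache or cap the context well below the model's maximum on a 24 GB machine.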

Can Mistral Small 3.2 (24B) run locally?

Mistral Small 3.2 (24B) is best suited to power-user machines with 32 GB of RAM or more. LocalClaw recommends Q5_K_M as the default quantization, with at least 24 GB of RAM.
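
A minimal run sketch, assuming llama-cpp-python as the runtime and a locally downloaded GGUF; the filename below is hypothetical:

# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-3.2-24B-Instruct-2506-Q5_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # keep modest unless you have RAM for a larger KV cache
    n_gpu_layers=-1,  # offload everything that fits; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in two sentences."}]
)
print(out["choices"][0]["message"]["content"])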

Search term for LM Studio or compatible runtimes: mistral-small-3.2-24b-instruct

Hugging Face repository: mistralai/Mistral-Small-3.2-24B-Instruct-2506-GGUF
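
If you prefer scripting the download, here is a sketch using huggingface_hub. The exact GGUF filename inside the repo is an assumption, so list the repo's files first:

# pip install huggingface_hub
from huggingface_hub import hf_hub_download, list_repo_files

repo = "mistralai/Mistral-Small-3.2-24B-Instruct-2506-GGUF"
print([f for f in list_repo_files(repo) if "Q5_K_M" in f])  # confirm the exact filename

path = hf_hub_download(
    repo_id=repo,
    filename="Mistral-Small-3.2-24B-Instruct-2506-Q5_K_M.gguf",  # hypothetical name
)
print(path)  # cached local path to point your runtime at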


Strengths

  • Improved instruction following and reduced repetition
  • Function calling support (see the sketch below)
  • Strong European-language support
  • 128K context window
  • Apache 2.0 license
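
Since function calling is a headline feature, here is a hedged sketch of exercising it through an OpenAI-compatible local endpoint (LM Studio's server defaults to http://localhost:1234/v1); the tool schema and model name are illustrative, not from the model card:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistral-small-3.2-24b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Lyon?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # model emits a structured call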

Limitations

  • Performance depends heavily on quantization, RAM bandwidth, and runtime support (see the back-of-envelope estimate below)
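
A back-of-envelope for that bandwidth dependence: during decoding, each new token streams roughly the full weight file through memory, so tokens per second is capped near memory bandwidth divided by model size. The bandwidth figures below are illustrative, not measurements:

# Rough decode-speed ceiling: tokens/s <= memory_bandwidth / model_bytes.
MODEL_GIB = 14.0  # Q5_K_M size listed above

for name, gib_per_s in [("dual-channel DDR5", 80),
                        ("Apple M-series unified memory", 400),
                        ("discrete GPU VRAM", 1000)]:
    print(f"{name:>29}: ~{gib_per_s / MODEL_GIB:.0f} tok/s ceiling")

This ceiling is why the same quantization can feel very different across machines, and why a smaller quant often buys more speed than a faster CPU.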

Best use cases

  • chat
  • code
  • power-user workloads
  • general
  • reasoning

Benchmarks

Speed: 6/10

Quality: 8/10

Coding: 8/10

Reasoning: 8/10

Technical details

Developer: Mistral AI

License: Apache 2.0

Context window: 128K tokens

Architecture: Dense transformer (24B parameters)

Released: 2025-06