Local LLM model page

Mistral Large (123B)

Mistral AI's flagship dense model. 128K-token context. Top-tier coding and multilingual performance. 262K downloads. Requires serious hardware.

Parameters
123B
Minimum RAM
96 GB
Model size
70 GB
Quantization
Q4_K_M

Can Mistral Large (123B) run locally?

Mistral Large (123B) is best suited to large-memory workstations. LocalClaw recommends Q4_K_M as the default quantization, which puts the weights at roughly 70 GB on disk and calls for at least 96 GB of RAM.
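As a sanity check on those numbers, here is a rough back-of-the-envelope sketch. The ~4.85 bits-per-weight figure for Q4_K_M is a community approximation (the exact average varies with how individual tensors are quantized), not an official spec:

```python
# Rough disk/memory footprint of quantized LLM weights.
# Assumption: Q4_K_M averages ~4.85 bits per weight (approximate).
def quantized_size_gb(params_billion: float, bits_per_weight: float = 4.85) -> float:
    """Approximate size of the quantized weights in decimal gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

size = quantized_size_gb(123)   # Mistral Large: 123B parameters
print(f"~{size:.0f} GB of weights")  # ~75 GB, in the ballpark of the listed 70 GB
```

Runtime RAM needs are higher than the weight file alone, since the KV cache and runtime buffers come on top, which is why the recommendation is 96 GB rather than 70 GB.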

Search term for LM Studio or compatible runtimes: mistral-large-instruct

Hugging Face repository: lmstudio-community/Mistral-Large-Instruct-2411-GGUF
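To fetch only the Q4_K_M files from that repository, something like the following should work. The `--include` glob and the local directory name are illustrative; check the repo's actual file names before downloading:

```shell
# Download just the Q4_K_M GGUF shards from the lmstudio-community repo.
# Requires the Hugging Face CLI: pip install -U "huggingface_hub[cli]"
huggingface-cli download lmstudio-community/Mistral-Large-Instruct-2411-GGUF \
  --include "*Q4_K_M*" \
  --local-dir ./mistral-large-q4km
```

Point LM Studio (or llama.cpp-compatible runtimes) at the downloaded directory, or simply use the in-app search term above.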

Tags: chat · code · quality

Strengths

  • Mistral flagship
  • 128K context
  • Top-tier coding and multilingual
  • 262K downloads

Limitations

  • Requires 96 GB+ RAM
  • Research license — commercial restrictions
  • Very slow inference

Best use cases

  • Enterprise AI
  • Complex coding
  • Research
  • Multilingual tasks

Benchmarks

Speed: 1/10

Quality: 10/10

Coding: 9/10

Reasoning: 10/10

Technical details

Developer: Mistral AI

License: Mistral Research License

Context window: 131,072 tokens

Architecture: Decoder-only Transformer, 128K context
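The 131,072-token window is itself a memory cost. A hedged sketch of the full KV-cache size, assuming the commonly reported Mistral Large 2 geometry (88 layers, 8 KV heads via grouped-query attention, head dimension 128 — figures taken from community configs, not verified here):

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """FP16 key+value cache size in decimal GB for a dense decoder-only Transformer."""
    # 2 tensors (K and V) per layer, each of shape [context, n_kv_heads, head_dim]
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem / 1e9

# Assumed Mistral Large 2 shape (illustrative values)
print(f"~{kv_cache_gb(88, 8, 128, 131_072):.0f} GB for a full 128K-token cache")
```

Under these assumptions a maxed-out context adds on the order of 47 GB at FP16, which is why long-context runs on this model push well past the weight size alone (KV-cache quantization can reduce this substantially).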

Released: 2024-11