Local LLM model page

Codestral (22B)

Codestral is Mistral AI's first code-focused model, supporting 80+ programming languages. 476K downloads.

Parameters
22B
Minimum RAM
16 GB
Model size
13 GB
Quantization
Q4_K_M

Can Codestral (22B) run locally?

Codestral (22B) is best suited to mainstream Macs and PCs with at least 16 GB of RAM. LocalClaw recommends the Q4_K_M quantization as the default.
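The 16 GB figure follows from the quantized model size plus working overhead. A back-of-the-envelope sketch, assuming Q4_K_M averages roughly 4.8 bits per weight (an approximation; the exact average varies by tensor):

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float = 4.8) -> float:
    """Rough on-disk size of a quantized model in decimal GB.

    bits_per_weight=4.8 is an assumed average for Q4_K_M, not an exact figure.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

size = quantized_size_gb(22)  # roughly 13 GB, matching the listed model size
```

Add a few GB on top for the context cache and runtime buffers, and 16 GB of RAM is the practical floor.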

Search term for LM Studio or compatible runtimes: codestral-22b-v0.1

Hugging Face repository: lmstudio-community/Codestral-22B-v0.1-GGUF
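Once the model is loaded in LM Studio (or another OpenAI-compatible runtime), requests go to a local HTTP endpoint. A minimal sketch of a chat-completion request body, assuming LM Studio's default server address `http://localhost:1234/v1/chat/completions` and the search term above as the model identifier:

```python
import json

# Request body for an OpenAI-compatible local server. The model name below
# matches the LM Studio search term; adjust it to whatever your runtime reports.
payload = {
    "model": "codestral-22b-v0.1",
    "messages": [
        {
            "role": "user",
            "content": "Write a Python function that reverses a linked list.",
        },
    ],
    "temperature": 0.2,  # low temperature suits deterministic code tasks
    "max_tokens": 512,
}
body = json.dumps(payload)
# POST `body` to the endpoint with any HTTP client once the server is running.
```

The same payload shape works against any runtime that exposes the OpenAI chat-completions API, so editor plugins and scripts can be pointed at the local server without code changes.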


Strengths

  • Mistral's first code model
  • 80+ languages
  • Strong code completion and generation
  • 476K downloads

Limitations

  • Non-production license
  • 32K context
  • No commercial use without a separate license from Mistral AI

Best use cases

  • Code generation
  • Code completion
  • Refactoring
  • Code review
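For the completion use case, Codestral supports fill-in-the-middle (FIM) prompting, where the model generates the code between a prefix and a suffix. A minimal sketch of one commonly cited raw FIM template using `[SUFFIX]` and `[PREFIX]` control tokens; this template is an assumption, so verify it against the model card for your runtime before relying on it:

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # Assumed fill-in-the-middle template: suffix first, then prefix,
    # with the model completing the span in between.
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

prompt = fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))\n",
)
```

Editor integrations typically build this prompt from the text before and after the cursor, so the model fills in exactly the missing span rather than rewriting the whole file.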

Benchmarks

Speed: 5/10

Quality: 7/10

Coding: 9/10

Reasoning: 7/10

Technical details

Developer: Mistral AI

License: MNPL (Non-Production License)

Context window: 32,768 tokens

Architecture: Transformer optimized for code

Released: 2024-05