Local LLM model page

Llama 3.2 (3B)

Meta's compact powerhouse. Excellent instruction following for its size.

Parameters: 3B
Minimum RAM: 4 GB
Model size: 2.2 GB
Quantization: Q5_K_M

Can Llama 3.2 (3B) run locally?

Llama 3.2 (3B) is best suited for entry-level laptops and desktops. LocalClaw recommends Q5_K_M as the default quantization, with at least 4 GB RAM.
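To see why 4 GB is a realistic floor, the rough memory budget is the quantized weights plus the KV cache, which grows with context length. Below is a minimal Python sketch of that estimate; the layer count (28), KV-head count (8), and head dimension (128) are assumptions based on the published Llama 3.2 3B architecture, and the fp16 KV-cache entries match common runtime defaults.

```python
def estimate_ram_gb(
    weights_gb: float = 2.2,   # Q5_K_M GGUF file size from the spec above
    context_tokens: int = 8192,
    n_layers: int = 28,        # assumed Llama 3.2 3B layer count
    n_kv_heads: int = 8,       # assumed grouped-query-attention KV heads
    head_dim: int = 128,       # assumed head dimension (3072 hidden / 24 heads)
    kv_bytes: int = 2,         # fp16 K/V entries; some runtimes can quantize this lower
) -> float:
    """Very rough RAM estimate: quantized weights + KV cache + fixed runtime overhead."""
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * context_tokens
    overhead_gb = 0.5  # scratch buffers, runtime, tokenizer (ballpark)
    return weights_gb + kv_cache_bytes / 1e9 + overhead_gb

if __name__ == "__main__":
    for ctx in (4096, 8192, 32768, 131072):
        print(f"{ctx:>7} tokens -> ~{estimate_ram_gb(context_tokens=ctx):.1f} GB")
```

At typical 4K-8K contexts this lands just under 4 GB, which matches the recommendation above; filling the full 131,072-token window would need far more memory or a quantized KV cache.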

Search term for LM Studio or compatible runtimes: llama-3.2-3b-instruct
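Once the model is loaded in LM Studio (or another runtime exposing an OpenAI-compatible server), it can be queried like any chat API. Here is a minimal sketch using the openai Python client; the base URL assumes LM Studio's default local endpoint of http://localhost:1234/v1, and the model identifier reuses the search term above, so adjust both if your setup differs.

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# The base_url and api_key below are assumptions matching LM Studio defaults.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # search term / model id from above
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the benefits of running a 3B model locally."},
    ],
    temperature=0.7,
    max_tokens=256,
)
print(response.choices[0].message.content)
```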

Hugging Face repository: lmstudio-community/Llama-3.2-3B-Instruct-GGUF
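If you prefer to skip a GUI, the GGUF can be pulled straight from that repository and loaded with llama-cpp-python. The sketch below assumes the Q5_K_M file is named Llama-3.2-3B-Instruct-Q5_K_M.gguf; check the repository's file list for the exact name before running it.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Filename is an assumption based on common GGUF naming; verify it on the repo page.
model_path = hf_hub_download(
    repo_id="lmstudio-community/Llama-3.2-3B-Instruct-GGUF",
    filename="Llama-3.2-3B-Instruct-Q5_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=8192)  # 8K context keeps RAM use modest
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify this review as positive or negative: 'Battery life is great.'"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```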

Tags: chat, lightspeed, general

Strengths

  • Strong instruction following for a 3B model
  • 128K-token context window
  • Fast inference on modest hardware
  • Native support for 8 languages

Limitations

  • Weaker than 7B+ models on complex tasks
  • Limited coding ability

Best use cases

  • Mobile AI
  • Quick Q&A
  • Text summarization
  • Classification

Benchmarks

Speed: 9/10

Quality: 5/10

Coding: 5/10

Reasoning: 5/10

Technical details

Developer: Meta AI

License: Llama 3.2 Community License

Context window: 131,072 tokens

Architecture: Transformer decoder-only

Released: 2024-09