Local LLM model page

Phi-4 Reasoning (14B)

Microsoft's reasoning-tuned variant of Phi-4. A top choice among 14B reasoning models, clearly ahead of DeepSeek R1 14B in this size class, and it rivals much larger models on math and logic.

Parameters: 14B
Minimum RAM: 12 GB
Model size: 8.5 GB
Quantization: Q5_K_M

Can Phi-4 Reasoning (14B) run locally?

Phi-4 Reasoning (14B) is best suited to mainstream Macs and PCs with 16 GB of RAM. LocalClaw recommends the Q5_K_M quantization as the default, with at least 12 GB of RAM.
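The RAM guidance above can be sanity-checked with a back-of-the-envelope estimate: a quantized model's file size is roughly parameters × bits-per-weight ÷ 8, plus headroom for the KV cache and runtime buffers. A minimal sketch (the bits-per-weight figures are approximate community estimates, not official values, and real GGUF files also carry embeddings and metadata, so actual sizes vary):

```python
# Back-of-the-envelope GGUF memory estimate. Bits-per-weight values
# below are rough community figures (assumptions), not official specs.
APPROX_BPW = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.5,
    "Q8_0": 8.5,
}

def model_file_size_gb(params_billion: float, quant: str) -> float:
    """Approximate on-disk size: parameters * bits-per-weight / 8."""
    return params_billion * APPROX_BPW[quant] / 8

def min_ram_gb(params_billion: float, quant: str,
               overhead_gb: float = 2.0) -> float:
    """File size plus a flat allowance for KV cache and runtime buffers."""
    return model_file_size_gb(params_billion, quant) + overhead_gb

print(f"estimated file size: {model_file_size_gb(14, 'Q5_K_M'):.1f} GB")
print(f"estimated minimum RAM: {min_ram_gb(14, 'Q5_K_M'):.1f} GB")
```

The flat 2 GB overhead is a simplification; the KV cache actually grows with context length, so long-context sessions need more headroom.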

Search term for LM Studio or compatible runtimes: phi-4-reasoning

Hugging Face repository: microsoft/Phi-4-reasoning-GGUF
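For runtimes without a built-in search, one way to fetch and run the model is via the Hugging Face CLI and llama.cpp. A sketch only: the GGUF filename, prompt, and paths below are illustrative assumptions, so list the repository's files first to confirm the exact name.

```shell
# Download the Q5_K_M GGUF from the repository named above
# (the filename pattern is an assumption; verify it in the repo).
huggingface-cli download microsoft/Phi-4-reasoning-GGUF \
  --include "*Q5_K_M*.gguf" --local-dir ./models

# Start an interactive llama.cpp session (-c sets the context size;
# adjust the model path to the file that was actually downloaded).
llama-cli -m ./models/phi-4-reasoning-Q5_K_M.gguf -c 4096 \
  -p "Prove that the sum of two odd numbers is even."
```
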

Tags: reasoning, code, power

Strengths

  • Rivals much larger models on math & logic
  • MIT license
  • Strong chain-of-thought
  • 544K downloads

Limitations

  • Needs 12 GB+ RAM
  • English-only
  • Reasoning overhead makes it slower

Best use cases

  • Mathematics
  • Logical reasoning
  • Scientific analysis
  • Complex problem solving

Benchmarks

Speed: 6/10

Quality: 8/10

Coding: 9/10

Reasoning: 10/10

Technical details

Developer: Microsoft Research

License: MIT

Context window: 32,768 tokens

Architecture: Transformer with reasoning-enhanced training

Released: 2025-04