Local LLM model page

DeepSeek R1 Distill (7B)

DeepSeek's reasoning model distilled down to 7B parameters; it shows its thought process step by step. 65.5M total downloads across the R1 family.

Parameters: 7B

Minimum RAM: 8 GB

Model size: 4.5 GB

Quantization: Q4_K_M

Can DeepSeek R1 Distill (7B) run locally?

DeepSeek R1 Distill (7B) is best suited for entry-level laptops and desktops. LocalClaw recommends Q4_K_M as the default quantization, with at least 8 GB RAM.
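The relationship between parameter count, quantization, and file size can be sanity-checked with a quick estimate. A minimal sketch, assuming Q4_K_M averages roughly 4.85 bits per weight (it mixes 4-bit and 6-bit blocks) and that the "7B" model actually has about 7.6B parameters:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in decimal GB from the parameter
    count and the effective bits per weight of the quantization."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumptions: ~4.85 bits/weight for Q4_K_M, ~7.6B actual parameters.
size = gguf_size_gb(7.6e9, 4.85)
print(f"{size:.1f} GB")  # in the same ballpark as the 4.5 GB listed above
```

Actual RAM use is higher than the file size because the KV cache and runtime overhead come on top, which is why 8 GB is the practical floor for a ~4.5 GB model.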

Search term for LM Studio or compatible runtimes: deepseek-r1-distill-qwen-7b

Hugging Face repository: lmstudio-community/DeepSeek-R1-Distill-Qwen-7B-GGUF
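Outside LM Studio, the GGUF file can be fetched straight from that repository. A sketch of the direct-download URL; the exact filename is an assumption based on lmstudio-community's usual naming convention:

```shell
REPO="lmstudio-community/DeepSeek-R1-Distill-Qwen-7B-GGUF"
FILE="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf"  # assumed filename

# Hugging Face serves raw files under /resolve/main/:
echo "https://huggingface.co/${REPO}/resolve/main/${FILE}"

# To download and run with llama.cpp instead, something like:
#   huggingface-cli download "$REPO" "$FILE" --local-dir .
#   llama-cli -m "$FILE" -p "Why is the sky blue?"
```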

Tags: reasoning, standard

Strengths

  • Shows reasoning chain-of-thought
  • 65.5M total R1 downloads
  • MIT license
  • Affordable reasoning

Limitations

  • Verbose thinking tokens
  • Distilled, so not as strong as the full R1
  • Sometimes overthinks simple tasks
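The verbose thinking tokens noted above arrive wrapped in `<think>...</think>` tags in R1-style output, so they are easy to strip when only the final answer is wanted. A minimal sketch:

```python
import re

# R1-style reasoning blocks: <think>...</think>, possibly multi-line.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(text: str) -> str:
    """Remove <think>...</think> blocks, keeping only the final answer."""
    return THINK_RE.sub("", text).strip()

raw = "<think>2 + 2... nothing to carry... it's 4.</think>\n2 + 2 = 4"
print(strip_reasoning(raw))  # -> 2 + 2 = 4
```

Keeping the reasoning visible is useful for education and debugging; stripping it is useful when piping answers into other tools.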

Best use cases

  • Math problems
  • Logical reasoning
  • Step-by-step explanations
  • Education

Benchmarks

Speed: 8/10

Quality: 7/10

Coding: 6/10

Reasoning: 8/10

Technical details

Developer: DeepSeek AI

License: MIT

Context window: 131,072 tokens

Architecture: Transformer distilled from DeepSeek-R1

Released: 2025-01