Local LLM model page

Qwen 3 Coder (30B)

Qwen's flagship coding model. Designed for agentic coding with a 256K context window. Outperforms Claude 3.5 Sonnet on SWE-bench. Apache 2.0 licensed.

Parameters: 30B
Minimum RAM: 24 GB
Model size: 18 GB
Quantization: Q4_K_M
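The 18 GB figure lines up with a rough back-of-the-envelope estimate. A minimal sketch, assuming Q4_K_M averages roughly 4.85 bits per weight (an approximate figure; actual GGUF files vary by tensor mix):

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float = 4.85) -> float:
    """Approximate quantized file size in gigabytes (10^9 bytes).

    Assumption: ~4.85 bits/weight for Q4_K_M; real files differ slightly
    because some tensors are stored at higher precision.
    """
    total_bits = params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 1e9

print(round(quantized_size_gb(30), 1))  # lands close to the listed 18 GB
```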

Can Qwen 3 Coder (30B) run locally?

Qwen 3 Coder (30B) is best suited for power-user machines with 32 GB RAM. LocalClaw recommends Q4_K_M as the default quantization, with at least 24 GB RAM.

Search term for LM Studio or compatible runtimes: qwen3-coder-30b

Hugging Face repository: Qwen/Qwen3-Coder-30B-GGUF
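Once the model is loaded, LM Studio can serve it through its OpenAI-compatible local server. A minimal sketch, assuming the server runs on the default port 1234 and the model id is "qwen3-coder-30b" (check the exact id shown in your runtime):

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "qwen3-coder-30b") -> dict:
    """Build an OpenAI-style chat completion payload.

    The model id is an assumption; use whatever id your runtime reports.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits code generation
    }

def ask(prompt: str) -> str:
    """Send the prompt to a local OpenAI-compatible endpoint."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running local server):
# print(ask("Write a Python function that reverses a linked list."))
```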


Strengths

  • Outperforms Claude 3.5 Sonnet on SWE-bench
  • 256K context — handles entire codebases
  • Designed for agentic coding (multi-step, autonomous)
  • Apache 2.0
  • Excellent at refactoring and multi-file edits

Limitations

  • 24 GB+ RAM required
  • Focused on code — less versatile for general chat
  • Can be verbose in explanations

Best use cases

  • Agentic coding (autonomous code generation)
  • Full codebase refactoring
  • Multi-file code review
  • CI/CD automation
  • Code completion and debugging

Benchmarks

Speed: 5/10

Quality: 9/10

Coding: 10/10

Reasoning: 9/10

Technical details

Developer: Alibaba Cloud (Qwen Team)

License: Apache 2.0

Context window: 262,144 tokens

Architecture: Transformer (decoder-only) optimized for code generation and agentic coding workflows

Released: 2025-07
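Given the 262,144-token context window above, a quick way to gauge whether a set of source files will fit is a character-based estimate. A minimal sketch, assuming the common ~4 characters-per-token rule of thumb for code (real counts require the model's tokenizer):

```python
import os

CONTEXT_TOKENS = 262_144   # Qwen3-Coder-30B context window
CHARS_PER_TOKEN = 4        # rough heuristic, not the real tokenizer

def estimated_tokens(paths: list[str]) -> int:
    """Estimate total tokens for a list of source files by byte count."""
    total_chars = sum(os.path.getsize(p) for p in paths)
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(paths: list[str], reserve: int = 8_192) -> bool:
    """Leave `reserve` tokens of headroom for the prompt and the reply."""
    return estimated_tokens(paths) <= CONTEXT_TOKENS - reserve
```

At roughly 4 characters per token, 256K tokens covers on the order of 1 MB of source text, which is why the window is large enough for whole mid-sized codebases.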