Local LLM model page

GLM 4.6 Air (12B)

Zhipu AI's lightweight flagship. Strong bilingual Chinese/English performance with a hybrid thinking mode, a 200K-token context window, and tool calling. Apache 2.0 licensed, making it an excellent alternative to Qwen 3.5 9B on modest GPUs.

Parameters
12B
Minimum RAM
12 GB
Model size
7.5 GB
Quantization
Q4_K_M

Can GLM 4.6 Air (12B) run locally?

GLM 4.6 Air (12B) is best suited to mainstream Macs and PCs with 16 GB of RAM. LocalClaw recommends Q4_K_M as the default quantization, with at least 12 GB of RAM.
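
As a sanity check on those numbers: Q4_K_M stores roughly 4.85 bits per weight, so a 12B model lands near the 7.5 GB listed above, and runtime overhead plus KV cache explains the 12 GB floor. A rough sketch — the bits-per-weight figure is a community estimate and the overhead numbers are illustrative, not measured:

```python
# Back-of-envelope check that a 12B model at Q4_K_M fits in RAM.
PARAMS = 12e9            # parameter count
BITS_PER_WEIGHT = 4.85   # approximate effective rate of Q4_K_M (estimate)

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"Quantized weights: ~{weights_gb:.1f} GB")   # ~7.3 GB, close to the 7.5 GB listed

# Add runtime overhead and a KV cache for the context you actually use;
# this headroom is why 12 GB is the practical minimum.
overhead_gb = 1.5        # illustrative: runtime buffers, OS share
kv_cache_gb = 2.0        # illustrative: grows with context length
print(f"Practical floor: ~{weights_gb + overhead_gb + kv_cache_gb:.1f} GB RAM")
```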

Search term for LM Studio or compatible runtimes: glm-4.6-air
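
Once the model is loaded, LM Studio's built-in local server speaks the OpenAI API (by default at http://localhost:1234/v1). A minimal sketch, assuming the served model id matches the search term above — check the identifier LM Studio shows once the model is loaded:

```python
# Chat with the locally served model through the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="glm-4.6-air",  # assumed id; verify in LM Studio
    messages=[{"role": "user", "content": "Summarize GLM 4.6 Air in one sentence."}],
)
print(response.choices[0].message.content)
```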

Hugging Face repository: THUDM/GLM-4.6-Air-GGUF
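
To fetch the quantized weights ahead of time, huggingface_hub can download a single file from the repository above. The exact .gguf filename below is an assumption; list the repo's files first and adjust:

```python
# Download one quantization from the GGUF repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="THUDM/GLM-4.6-Air-GGUF",
    filename="GLM-4.6-Air-Q4_K_M.gguf",  # assumed name; verify in the repo
)
print("Saved to:", path)
```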


Strengths

  • Strong bilingual Chinese/English performance
  • Hybrid thinking mode
  • 200K-token context window
  • Tool calling (see the sketch after this list)
  • Permissive Apache 2.0 license
  • Light enough for modest GPUs; a strong alternative to Qwen 3.5 9B
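
Tool calling works through any OpenAI-compatible runtime whose chat template supports it for this model. A hedged sketch against a local LM Studio server; the get_weather tool and the model id are hypothetical, for illustration only:

```python
# OpenAI-style tool calling against a local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6-air",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)
# If the model decides to call the tool, the arguments arrive as JSON.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```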

Limitations

  • Real-world performance depends heavily on the quantization level, RAM bandwidth, and runtime support.

Best use cases

  • Chat
  • Code (see the llama-cpp-python sketch after this list)
  • Reasoning
  • Standard and general-purpose tasks
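
For running these workloads without LM Studio, llama-cpp-python can load the GGUF directly. A minimal sketch, assuming the file downloaded earlier; n_ctx and n_gpu_layers are illustrative knobs, not recommended settings:

```python
# Run the downloaded GGUF directly with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.6-Air-Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,          # raise toward 200K only if RAM allows the KV cache
    n_gpu_layers=-1,     # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```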

Benchmarks

Speed: 8/10

Quality: 8/10

Coding: 8/10

Reasoning: 8/10

Technical details

Developer: Zhipu AI

License: Apache 2.0 (confirm in the model repository)

Context window: 200K tokens

Architecture: See model card

Released: 2026-02