GLM 4.6 (355B MoE)

Zhipu AI's flagship, the full GLM 4.6: 200K context, strong tool calling and agentic workflows, and reasoning and coding quality that competes with Claude 3.5 Sonnet. MIT licensed. Requires server-grade hardware.

Parameters: 355B (32B active)
Minimum RAM: 320 GB
Model size: 200 GB
Quantization: Q4_K_M

Can GLM 4.6 (355B MoE) run locally?

GLM 4.6 (355B MoE) is best suited for server-grade or multi-GPU systems. LocalClaw recommends Q4_K_M as the default quantization, with at least 320 GB RAM.
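
As a rough back-of-envelope check on those numbers, here is a sketch assuming Q4_K_M averages about 4.5 bits per weight; the exact effective figure varies by tensor layout, so treat it as an estimate, not a spec:

    # Rough size estimate for a Q4_K_M quantization of GLM 4.6.
    # The bits-per-weight value is an assumption: Q4_K_M mixes 4-bit
    # and 6-bit blocks, so the effective average differs per model.

    total_params = 355e9        # total parameters (32B active per token)
    bits_per_weight = 4.5       # assumed effective average for Q4_K_M

    weight_bytes = total_params * bits_per_weight / 8
    print(f"Weights alone: ~{weight_bytes / 1e9:.0f} GB")  # ~200 GB

    # KV cache, activations, and runtime overhead come on top of the
    # weights, which is why the recommended minimum is 320 GB of RAM
    # rather than ~200 GB.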

Search term for LM Studio or compatible runtimes: glm-4.6-355b

Hugging Face repository: zai-org/GLM-4.6
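
If you prefer scripting the download, a minimal sketch with huggingface_hub follows. Note that zai-org/GLM-4.6 hosts the full-precision checkpoint, which is much larger than the 200 GB Q4_K_M figure above; GGUF quantizations are usually published in separate community repos, so the repo_id here is a placeholder for whichever build you actually use:

    # Minimal download sketch using huggingface_hub. The repo_id below
    # points at the official full-precision checkpoint; swap it for a
    # community GGUF quantization repo if you want a Q4_K_M build.
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(
        repo_id="zai-org/GLM-4.6",
        allow_patterns=["*.json", "*.gguf", "*.safetensors"],
    )
    print("Model files in:", local_path)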

Strengths

  • 200K-token context window
  • Strong tool calling and agentic workflows (see the sketch after this list)
  • Reasoning and coding quality competitive with Claude 3.5 Sonnet
  • Permissive MIT license
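
To exercise the tool-calling strength, here is a minimal sketch against a local OpenAI-compatible endpoint (LM Studio serves on port 1234 by default). The model identifier and the get_weather tool are illustrative assumptions, not part of the model card:

    # Minimal tool-calling sketch against a local OpenAI-compatible
    # server such as LM Studio. The tool definition is a stand-in;
    # real agents would dispatch the returned tool_calls themselves.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",          # hypothetical example tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="glm-4.6-355b",   # match the identifier your runtime reports
        messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
        tools=tools,
    )
    print(resp.choices[0].message.tool_calls)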

Limitations

  • Performance depends heavily on quantization level, memory bandwidth, and runtime support.

Best use cases

  • chat
  • code
  • reasoning
  • quality
  • general

Benchmarks

Speed: 2/10

Quality: 10/10

Coding: 10/10

Reasoning: 10/10

Technical details

Developer: Zhipu AI

License: MIT

Context window: 200K tokens

Architecture: Mixture-of-Experts (355B total parameters, 32B active)

Released: 2025-09