Local LLM model page

EXAONE Deep (32B)

A large reasoning model from LG AI Research with exceptional math and coding performance. Over 200K downloads.

Parameters
32B
Minimum RAM
24 GB
Model size
19 GB
Quantization
Q4_K_M

Can EXAONE Deep (32B) run locally?

EXAONE Deep (32B) runs best on power-user machines with 32 GB of RAM. LocalClaw recommends the Q4_K_M quantization as the default, which requires at least 24 GB of RAM.
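Why 24 GB for a 32B model? A rough back-of-envelope sketch: the weight file size is roughly parameter count times average bits per weight. The ~4.8 bits/weight figure for Q4_K_M below is an assumption (it mixes 4- and 6-bit blocks), not an official spec, but it reproduces the 19 GB model size listed above.

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate loaded size of a quantized model in GB.

    1e9 params * bits / 8 bits-per-byte / 1e9 bytes-per-GB
    cancels down to params_billion * bits_per_weight / 8.
    """
    return params_billion * bits_per_weight / 8

# Assumption: Q4_K_M averages roughly 4.8 effective bits per weight.
size = gguf_size_gb(32, 4.8)
print(f"~{size:.1f} GB of weights")
# KV cache and runtime overhead add a few more GB on top of the weights,
# which is why the practical minimum lands around 24 GB.
```

This is an estimate, not a guarantee; actual memory use depends on context length and the runtime's KV-cache settings.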

Search term for LM Studio or compatible runtimes: exaone-deep-32b

Hugging Face repository: LGAI-EXAONE/EXAONE-Deep-32B-GGUF
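For runtimes other than LM Studio, the repository name above can be used directly. A minimal sketch, assuming a recent llama.cpp build whose `llama-cli` accepts the `-hf repo:quant` shorthand (the command is echoed here rather than executed, since the first run downloads roughly 19 GB):

```shell
# Assumed values, taken from the model page above.
MODEL_REPO="LGAI-EXAONE/EXAONE-Deep-32B-GGUF"
QUANT="Q4_K_M"

# Recent llama.cpp builds fetch the matching GGUF file on first use.
CMD="llama-cli -hf ${MODEL_REPO}:${QUANT}"
echo "${CMD}"
# To actually run it (downloads the model on first invocation):
#   eval "${CMD}"
```

Flag names and the repo:quant syntax vary across llama.cpp versions; check `llama-cli --help` on your build before relying on this.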


Strengths

  • Purpose-built reasoning model from LG AI Research
  • Exceptional math and coding performance
  • Widely adopted: over 200K downloads

Limitations

  • Performance depends heavily on quantization, RAM bandwidth, and runtime support.
  • Slow generation relative to smaller models (speed benchmark: 4/10).

Best use cases

  • Complex multi-step reasoning
  • Power-user setups with 32 GB of RAM or more
  • Workloads that prioritize output quality over speed

Benchmarks

Speed: 4/10

Quality: 9/10

Coding: 8/10

Reasoning: 10/10

Technical details

Developer: LG AI Research (EXAONE)

License: See model repository

Context window: Not listed; see model card

Architecture: See model card

Released: 2025-02