Local LLM model page
DBRX (132B MoE)
Databricks' open mixture-of-experts (MoE) LLM: 132B total parameters, with 36B active per token. Strong general-purpose performance. 111K downloads.
Parameters
132B (36B active)
Minimum RAM
96 GB
Model size
75 GB
Quantization
Q4_K_M
Can DBRX (132B MoE) run locally?
DBRX (132B MoE) is best suited for large-memory workstations. LocalClaw recommends Q4_K_M as the default quantization, with at least 96 GB RAM.
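The 96 GB figure can be sanity-checked from the parameter count. A minimal sketch, assuming Q4_K_M averages roughly 4.5 bits per weight (the true ratio varies by tensor) and about 25% headroom for KV cache and runtime overhead; both numbers are assumptions, not values from the model card:

```python
# Rough RAM estimate for a quantized model.
# Assumptions (not from the model card): Q4_K_M ~4.5 bits/weight on average,
# ~25% headroom for KV cache, activations, and runtime overhead.
def estimate_ram_gb(total_params: float, bits_per_weight: float = 4.5,
                    overhead: float = 1.25) -> float:
    weight_bytes = total_params * bits_per_weight / 8  # quantized weights
    return weight_bytes * overhead / 1e9               # decimal gigabytes

file_gb = 132e9 * 4.5 / 8 / 1e9   # ~74 GB, close to the listed 75 GB file
ram_gb = estimate_ram_gb(132e9)   # ~93 GB, in line with the 96 GB minimum
print(f"model file ~{file_gb:.0f} GB, working RAM ~{ram_gb:.0f} GB")
```

Note that only 36B parameters are active per token, which helps generation speed, but all 132B must still fit in memory.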
Search term for LM Studio or compatible runtimes: dbrx-instruct
Hugging Face repository: databricks/dbrx-instruct-GGUF
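The repository above can be pulled and run with llama.cpp as sketched below. The GGUF filename is an assumption; list the repository's files before committing to a ~75 GB download:

```shell
# Fetch the Q4_K_M GGUF from the repository listed above.
# NOTE: the exact filename is assumed; check the repo's file list first.
huggingface-cli download databricks/dbrx-instruct-GGUF \
  dbrx-instruct-Q4_K_M.gguf --local-dir models

# Start an interactive chat with llama.cpp (llama-cli from a recent build).
llama-cli -m models/dbrx-instruct-Q4_K_M.gguf \
  -c 4096 -n 256 --interactive-first
```

In LM Studio, searching for the term above and selecting the Q4_K_M file achieves the same result through the UI.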
Strengths
- MoE design activates only 36B of 132B parameters per token, giving better generation throughput than a dense model of comparable quality.
- Strong general-purpose quality (9/10), with solid coding (8/10) and reasoning (8/10) scores.
Limitations
- Performance depends heavily on quantization level, memory bandwidth, and runtime support.
- Large footprint: a 75 GB download and a 96 GB RAM floor rule out most consumer laptops, and generation is slow (speed 2/10).
Best use cases
- chat
- general
- quality
Benchmarks
Speed: 2/10
Quality: 9/10
Coding: 8/10
Reasoning: 8/10
Technical details
Developer: Databricks
License: See model repository
Context window: Unknown
Architecture: Mixture-of-experts (MoE) Transformer; see model card for details
Released: 2024-03