Local LLM model page

Llama-3.3-Nemotron-Super (49B)

NVIDIA's efficiency-focused 49B model, distilled from Llama-3.3-70B with reasoning data from DeepSeek-R1. Outperforms Llama-3.3-70B at roughly half the compute, with strong reasoning, coding, and instruction following. Runs on a Mac Studio with 64 GB of unified memory. NVIDIA Open Model License.

Parameters
49B
Minimum RAM
40 GB
Model size
30 GB
Quantization
Q4_K_M

Can Llama-3.3-Nemotron-Super (49B) run locally?

Llama-3.3-Nemotron-Super (49B) is best suited to high-end workstations with 64 GB of RAM. LocalClaw recommends Q4_K_M as the default quantization, which requires at least 40 GB of RAM.
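
As a rough sanity check before downloading, you can estimate whether the quantized file will fit from its size plus runtime overhead. A minimal sketch, assuming the ~30 GB Q4_K_M file from the table above and an illustrative ~1.33x overhead factor (which is what maps 30 GB of weights to the 40 GB minimum quoted here); the factor is an assumption, not a value from the model card:

```python
import psutil  # pip install psutil

GGUF_SIZE_GB = 30        # Q4_K_M file size from the spec table above
OVERHEAD_FACTOR = 1.33   # assumption: headroom for KV cache, activations,
                         # and runtime buffers (30 GB * 1.33 ~= 40 GB)

def fits_in_memory(model_gb: float, factor: float = OVERHEAD_FACTOR) -> bool:
    """Compare the estimated working set against total system RAM."""
    total_gb = psutil.virtual_memory().total / 1024**3
    needed_gb = model_gb * factor
    print(f"RAM: {total_gb:.1f} GB total, ~{needed_gb:.1f} GB needed")
    return total_gb >= needed_gb

if __name__ == "__main__":
    print("worth trying" if fits_in_memory(GGUF_SIZE_GB) else "too tight")
```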

Search term for LM Studio or compatible runtimes: llama-3.3-nemotron-super-49b-v1

Hugging Face repository: nvidia/Llama-3.3-Nemotron-Super-49B-v1-GGUF
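
To go from the repository above to a running model, one option is llama-cpp-python plus huggingface_hub. A minimal sketch: the GGUF filename is a guess at the repo's Q4_K_M artifact name (large quants are sometimes split into shards), so check the repository's file list first; the "detailed thinking on" system prompt follows NVIDIA's documented convention for toggling Nemotron's reasoning mode:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub
from llama_cpp import Llama                  # pip install llama-cpp-python

# Filename is an assumption -- verify against the repo's file list.
path = hf_hub_download(
    repo_id="nvidia/Llama-3.3-Nemotron-Super-49B-v1-GGUF",
    filename="Llama-3.3-Nemotron-Super-49B-v1-Q4_K_M.gguf",
)

llm = Llama(
    model_path=path,
    n_ctx=8192,       # keep modest; a large context inflates KV-cache RAM
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA where available
)

out = llm.create_chat_completion(
    messages=[
        # Per NVIDIA's model card, a system prompt toggles reasoning mode.
        {"role": "system", "content": "detailed thinking on"},
        {"role": "user", "content": "Summarize the tradeoffs of Q4_K_M."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```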


Strengths

  • Outperforms Llama-3.3-70B at roughly half the compute
  • Strong reasoning, coding, and instruction following
  • Fits on a Mac Studio with 64 GB of unified memory at Q4_K_M
  • Permissive NVIDIA Open Model License

Limitations

  • Performance depends heavily on quantization, RAM bandwidth and runtime support.

Best use cases

  • Chat and general assistance
  • Multi-step reasoning
  • Code generation
  • Power users who prioritize output quality over speed

Benchmarks

Speed: 4/10

Quality: 9/10

Coding: 9/10

Reasoning: 9/10
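
The modest speed score reflects that a 49B model is memory-bandwidth-bound on consumer hardware. For a concrete number on your own machine, here is a minimal throughput sketch with llama-cpp-python, assuming the Q4_K_M file downloaded in the earlier step:

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Path assumes the Q4_K_M file from the download step above.
llm = Llama(model_path="Llama-3.3-Nemotron-Super-49B-v1-Q4_K_M.gguf",
            n_ctx=4096, n_gpu_layers=-1, verbose=False)

prompt = "Explain the difference between throughput and latency."
start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s "
      f"({n_tokens / elapsed:.1f} tok/s)")
```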

Technical details

Developer: NVIDIA (Nemotron family)

License: NVIDIA Open Model License

Context window: 128K tokens

Architecture: Dense transformer derived from Llama 3.3 via neural architecture search (see model card)

Released: 2025-02