Local LLM model page

Dolphin 3 (8B)

Uncensored general-purpose model with function calling. No content filters. 3.1M downloads.

Parameters: 8B
Minimum RAM: 8 GB
Model size: 4.7 GB
Quantization: Q5_K_M

Can Dolphin 3 (8B) run locally?

Dolphin 3 (8B) is best suited for entry-level laptops and desktops. LocalClaw recommends Q5_K_M as the default quantization, with at least 8 GB RAM.
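The 8 GB minimum comes from needing room for the 4.7 GB quantized weights plus runtime overhead (context cache, OS, runtime buffers). A minimal sketch of that fit check; the `can_run` helper and the 2 GB overhead figure are illustrative assumptions, not part of the page:

```python
def can_run(system_ram_gb: float,
            model_size_gb: float = 4.7,   # Q5_K_M file size from this page
            overhead_gb: float = 2.0) -> bool:
    """Rough check: weights plus assumed KV-cache/OS overhead must fit in RAM."""
    return system_ram_gb >= model_size_gb + overhead_gb

print(can_run(8.0))  # True  -- consistent with the 8 GB minimum above
print(can_run(4.0))  # False -- 4 GB machines fall short
```

Real headroom varies with context length and runtime, so treat the overhead constant as a placeholder to tune per machine.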

Search term for LM Studio or compatible runtimes: dolphin3-8b

Hugging Face repository: cognitivecomputations/Dolphin3-8B-GGUF

Tags: chat, code, standard, general

Strengths

  • Uncensored — no content filters
  • Function calling support
  • Apache 2.0
  • 3.1M downloads
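Since the model supports function calling and LM Studio exposes an OpenAI-compatible server, a request would carry a standard `tools` array. A hedged sketch of that payload; the `get_weather` function and its schema are hypothetical, for illustration only:

```python
import json

# Hypothetical tool definition in the standard OpenAI "tools" schema.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Chat-completion request body; "dolphin3-8b" matches the search term above.
payload = {
    "model": "dolphin3-8b",
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": [tool],
}

print(json.dumps(payload, indent=2))
```

The model replies with a `tool_calls` entry naming the function and its arguments; your code executes the function and sends the result back in a follow-up message.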

Limitations

  • No safety guardrails
  • English-only
  • Fine-tuned model: inherits base-model limitations

Best use cases

  • Unrestricted chat
  • Creative writing
  • Role-playing
  • Function calling
  • Research

Benchmarks

Speed: 8/10

Quality: 7/10

Coding: 7/10

Reasoning: 6/10

Technical details

Developer: Cognitive Computations

License: Apache 2.0

Context window: 131,072 tokens

Architecture: Transformer (fine-tuned Llama 3.1 8B)

Released: 2025-02