Llama 3.2 Vision (90B)
Meta's largest vision model. 128K context with powerful image reasoning and analysis. Requires significant hardware.
Parameters: 90B
Minimum RAM: 72 GB
Model size: 55 GB
Quantization: Q4_K_M
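As a rough sanity check on these numbers: Q4_K_M averages somewhere around 4.5 to 5 bits per weight, so 90B parameters land near the listed 55 GB, and headroom for the KV cache, vision activations and the runtime pushes the practical floor toward the 72 GB figure. A minimal sketch of that arithmetic, where the bits-per-weight and overhead factors are stated assumptions rather than specs:

def quantized_weight_gb(params_billion: float, bits_per_weight: float = 4.8) -> float:
    # Approximate size of the quantized weights: parameters x bits / 8 bytes per byte.
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def suggested_ram_gb(weight_gb: float, overhead: float = 1.3) -> float:
    # Headroom for the KV cache, vision activations and the runtime itself.
    return weight_gb * overhead

weights = quantized_weight_gb(90)   # ~54 GB, close to the listed 55 GB
print(f"weights ~ {weights:.0f} GB, suggested minimum RAM ~ {suggested_ram_gb(weights):.0f} GB")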
Can Llama 3.2 Vision (90B) run locally?
Llama 3.2 Vision (90B) is best suited for large-memory workstations. LocalClaw recommends Q4_K_M as the default quantization, with at least 72 GB RAM.
Search term for LM Studio or compatible runtimes: llama-3.2-90b-vision-instruct
Hugging Face repository: meta-llama/Llama-3.2-90B-Vision-Instruct-GGUF
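Once the model is loaded, a quick way to smoke-test it is through the runtime's OpenAI-compatible local endpoint. The sketch below assumes LM Studio's default server at localhost:1234; the model identifier, port and image filename are placeholders to adapt to your own setup.

import base64
from openai import OpenAI

# Point the OpenAI client at the local server instead of the hosted API.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Hypothetical local image; replace with your own file.
with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llama-3.2-90b-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what this chart shows."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)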
Strengths
- Meta's largest open vision model, with strong image reasoning and analysis
- 128K-token context window
- Top-tier output quality (10/10 on LocalClaw's quality benchmark)
Limitations
- Performance depends heavily on quantization, RAM bandwidth and runtime support; the sketch below shows why bandwidth is the usual bottleneck.
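To make the bandwidth point concrete: decoding on a dense 90B model is memory-bound, since each generated token streams the full quantized weight set, so memory bandwidth divided by weight size gives a rough speed ceiling. The hardware bandwidth figures below are illustrative assumptions, not measurements.

def decode_ceiling_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    # Each generated token reads the full quantized weight set once,
    # so memory bandwidth sets a hard upper bound on decode speed.
    return bandwidth_gb_s / weights_gb

for label, bandwidth in [("dual-channel DDR5 (~90 GB/s)", 90),
                         ("Apple M-series Max (~400 GB/s)", 400)]:
    print(f"{label}: ~{decode_ceiling_tokens_per_sec(bandwidth, 55):.1f} tokens/s ceiling")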
Best use cases
- Vision tasks: image understanding, chart and document analysis
- Quality-critical work where output fidelity matters more than generation speed
Benchmarks
Speed: 1/10
Quality: 10/10
Coding: 7/10
Reasoning: 9/10
Technical details
Developer: Meta
License: Llama 3.2 Community License (see model repository)
Context window: 128K tokens
Architecture: See model card
Released: 2024-09