InternLM 2.5 (7B)

A practical reasoning model from Shanghai AI Lab with strong bilingual (English/Chinese) performance. 101K downloads.

Parameters: 7B
Minimum RAM: 8 GB
Model size: 4.5 GB
Quantization: Q5_K_M

Can InternLM 2.5 (7B) run locally?

InternLM 2.5 (7B) is well suited to entry-level laptops and desktops. LocalClaw recommends Q5_K_M as the default quantization, with at least 8 GB of RAM.
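As a rough sanity check on the 8 GB figure, the sketch below estimates total RAM as model file size plus KV cache plus runtime overhead. The layer, head, and precision defaults are typical 7B-class assumptions rather than values taken from the InternLM model card, and the overhead constant is a guess, not a measurement.

```python
# Rough RAM estimate: model file + KV cache + runtime overhead.
# Defaults (32 layers, 8 KV heads, head dim 128, fp16 cache, 1 GB
# overhead) are assumed typical 7B-class values, not figures from
# the InternLM model card.

def estimate_ram_gb(model_file_gb: float, n_ctx: int = 4096,
                    n_layers: int = 32, n_kv_heads: int = 8,
                    head_dim: int = 128, kv_bytes: int = 2,
                    overhead_gb: float = 1.0) -> float:
    # The KV cache stores one K and one V vector per layer per token.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * kv_bytes * n_ctx / 1e9
    return model_file_gb + kv_gb + overhead_gb

# 4.5 GB Q5_K_M file (from the table above) at a 4K context:
print(f"~{estimate_ram_gb(4.5):.1f} GB")  # ≈ 6.0 GB, inside the 8 GB minimum
```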

Search term for LM Studio or compatible runtimes: internlm2_5-7b-chat

Hugging Face repository: internlm/internlm2_5-7b-chat-GGUF
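For a scripted setup, the huggingface_hub client can locate the Q5_K_M file in the repository above without hard-coding a filename (exact GGUF names vary between repos), and llama-cpp-python can then load it. This is a minimal sketch under those assumptions; llama-cpp-python is one compatible runtime, not the only option.

```python
from huggingface_hub import hf_hub_download, list_repo_files
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

REPO = "internlm/internlm2_5-7b-chat-GGUF"

# Pick the Q5_K_M file from the repo listing rather than guessing a name.
gguf = next(f for f in list_repo_files(REPO)
            if f.endswith(".gguf") and "q5_k_m" in f.lower())
path = hf_hub_download(repo_id=REPO, filename=gguf)

llm = Llama(model_path=path, n_ctx=4096, verbose=False)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what you can do in one sentence."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```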


Strengths

  • Practical reasoning ability for its size
  • Strong bilingual (English/Chinese) performance
  • Widely used, with 101K downloads

Limitations

  • Performance depends heavily on quantization, RAM bandwidth, and runtime support; a quick throughput probe is sketched below.
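Because throughput varies so much across machines, a quick local measurement is more informative than any published number. A minimal probe, assuming llama-cpp-python and a locally downloaded GGUF file (the filename below is a placeholder, not the repository's actual file name):

```python
import time
from llama_cpp import Llama

# Replace with the local path of your downloaded GGUF file.
MODEL_PATH = "internlm2_5-7b-chat-q5_k_m.gguf"  # hypothetical filename

llm = Llama(model_path=MODEL_PATH, n_ctx=4096, verbose=False)

start = time.perf_counter()
out = llm("Explain briefly what quantization does to an LLM.",
          max_tokens=128)
elapsed = time.perf_counter() - start

n = out["usage"]["completion_tokens"]
print(f"{n} tokens in {elapsed:.1f}s -> {n / elapsed:.1f} tok/s")
```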

Best use cases

  • chat
  • reasoning
  • standard

Benchmarks

Speed: 8/10

Quality: 7/10

Coding: 6/10

Reasoning: 7/10

Technical details

Developer: Shanghai AI Lab (internlm on Hugging Face)

License: See model repository

Context window: Not specified (see model card)

Architecture: See model card

Released: 2024-07