Guide · 15 min read · February 12, 2026

OpenClaw: The Complete Guide to the Self-Hosted AI Assistant Gateway

68,000+ GitHub stars. 100% self-hosted. Zero telemetry. OpenClaw is the open-source AI gateway that connects your chat surfaces, CLI, and tools to local or remote model backends — keeping every conversation under your control. Here's everything you need to know, and how to set it up in under 5 minutes.

Quick Facts

  • What it is: Self-hosted AI assistant gateway
  • GitHub Stars: 68,000+
  • Surfaces: Desktop app, CLI, web UI
  • Stack: Electron + React + TypeScript
  • Backends: LM Studio, Ollama, any OpenAI-compatible API
  • License: MIT (fully open-source)

1. What Is OpenClaw?

OpenClaw is a self-hosted AI assistant gateway that connects your chat channels and tools to local or remote model backends (LM Studio, Ollama, any OpenAI-compatible API). Think of it less as "a ChatGPT clone" and more as an orchestration layer — it routes your prompts to whichever inference engine you choose, manages conversations and context, and extends functionality through a skills/plugin system.

Created as a weekend experiment that exploded into a full-blown phenomenon, OpenClaw has amassed over 68,000 stars on GitHub, making it one of the most popular open-source AI projects in the world. It ships with multiple surfaces: a desktop app (Electron), a CLI for terminal workflows, and a web UI you can self-host. Under the hood, it connects to any server that exposes an OpenAI-compatible /v1/chat/completions endpoint — including LM Studio, Ollama, llama.cpp, vLLM, and even remote GPU servers you control.

The core project is built with Electron, React, and TypeScript for the desktop surface, but the gateway architecture means it’s not locked to a single interface. You can interact with the same backend through the CLI, pipe output into scripts, or expose the web UI on your local network for other devices.

💡 The name: OpenClaw started as "Clawdbot" (a play on Claude + bot), then evolved into OpenClaw as it grew beyond a single AI personality into a universal AI gateway and orchestration platform. The claw mascot — a friendly red creature — has become the unofficial emoji of the local AI community.

2. Why OpenClaw Over ChatGPT?

ChatGPT is excellent — but it comes with trade-offs that matter more than ever in 2026. Here's why thousands of developers, researchers, and privacy-conscious users are switching to OpenClaw:

🔒 Complete Privacy

Your conversations never leave your machine. No cloud processing, no data collection, no training on your inputs. Supports air-gapped environments.

💰 Zero Cost

No subscription fees. No per-token pricing. No usage limits. Once your model is downloaded, it's unlimited forever.

⚡ No Rate Limits

No "you've reached your limit" messages. No waiting for servers. No downtime. Your AI is always available, as fast as your hardware allows.

🧩 Fully Customizable

Switch models instantly. Create custom system prompts. Install skills and plugins. Tune temperature, context length, and every parameter.

🌐 Works Offline

No internet required after setup. Perfect for flights, restricted networks, or anywhere you need AI without connectivity.

🔓 No Censorship

Local models don't have the content restrictions of commercial APIs. Use uncensored models for creative writing, research, or red-teaming.

3. Key Features

OpenClaw's features fall into three areas: 💬 chat interface, 🔒 privacy & security, and 🛠️ developer features.

4. How It Works

OpenClaw follows a gateway architecture — it sits between you (via desktop app, CLI, or web UI) and your inference backend. It doesn’t bundle its own model runtime; instead, it acts as an intelligent routing and orchestration layer:

🖥️ How OpenClaw Routes Your Prompts

You (Desktop • CLI • Web UI) → OpenClaw (Gateway + Orchestration) → LM Studio / Ollama (local inference server) → Your LLM (Qwen 3, DeepSeek, etc.)

Everything happens on localhost — no internet traffic whatsoever.

The key design principle: OpenClaw doesn't ship its own inference engine. It delegates model execution to a dedicated backend (LM Studio, Ollama, vLLM, etc.) via the OpenAI-compatible API (http://localhost:1234/v1 for LM Studio, http://localhost:11434/v1 for Ollama). This separation means OpenClaw focuses on what it does best — orchestration, conversation management, skills, and multi-surface access — while the backend focuses on fast inference. You pick the best tool for each role.
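The split is easy to see in code. The sketch below is illustrative TypeScript, not OpenClaw's actual source — the backend names, model ID, and helper function are assumptions — but the base URLs and the /v1/chat/completions payload follow the standard OpenAI-compatible contract described above.

```typescript
// Sketch of the gateway pattern: routing a prompt is just "pick a base URL,
// send the same OpenAI-compatible payload". Illustrative names only.

type Backend = "lmstudio" | "ollama";

const BASE_URLS: Record<Backend, string> = {
  lmstudio: "http://localhost:1234/v1",
  ollama: "http://localhost:11434/v1",
};

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  stream: boolean;
}

// Build a request that any OpenAI-compatible server understands.
function buildChatRequest(backend: Backend, model: string, prompt: string) {
  const body: ChatRequest = {
    model,
    messages: [{ role: "user", content: prompt }],
    stream: false,
  };
  return {
    url: `${BASE_URLS[backend]}/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

const req = buildChatRequest("lmstudio", "qwen3-8b", "Hello!");
console.log(req.url); // http://localhost:1234/v1/chat/completions
// To actually send it (requires a running backend):
// const res = await fetch(req.url, req.init);
```

Swapping backends means changing one string — exactly the separation of concerns the gateway architecture is built around.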

5. Installation Guide

OpenClaw offers multiple installation methods depending on your platform. Here's how to get it running in under 5 minutes.

🍎 macOS

Option A: One-line installer (recommended)

curl -fsSL https://openclaw.ai/install.sh | bash

Option B: Homebrew

brew install --cask openclaw

Option C: Direct download — grab the .dmg from openclaw.ai

🐧 Linux

Option A: One-line installer

curl -fsSL https://openclaw.ai/install.sh | bash

Option B: AppImage — download the .AppImage from the releases page. Make it executable and run:

chmod +x OpenClaw-*.AppImage
./OpenClaw-*.AppImage

Option C: Snap / Flatpak — available on both package managers for Ubuntu, Fedora, and other distros.

🪟 Windows

Download the .exe installer from openclaw.ai and run it. That's it — Windows Defender may ask for confirmation the first time.

✅ System Requirements: OpenClaw itself is lightweight — the desktop app is under 200 MB, and the CLI is even smaller. The real requirement is your inference backend — see our How to Choose a Local LLM guide for RAM/VRAM recommendations.

6. Connecting to LM Studio

LM Studio is the most popular backend for OpenClaw. Here's how to connect them:

Step 1: Start LM Studio's Local Server

Open LM Studio → Go to the Local Server tab (left sidebar) → Click "Start Server". The server will launch on http://localhost:1234

Step 2: Open OpenClaw Settings

Launch OpenClaw → Click the ⚙️ gear icon → Navigate to "AI Provider" settings

Step 3: Configure the Endpoint

Set Provider to "OpenAI-Compatible" → Set Base URL to http://localhost:1234/v1 → API Key can be left empty or set to lm-studio

Step 4: Start Chatting

Select a model from the dropdown (OpenClaw auto-detects loaded models) → Type your first message. The response streams from LM Studio through OpenClaw's interface. 🎉

# Verify the connection works (run in terminal):
curl http://localhost:1234/v1/models

# Expected output: JSON list of your loaded models

7. Connecting to Ollama

Ollama is a popular alternative backend. The setup is slightly different:

Step 1: Install & Start Ollama

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model (e.g., Qwen 3 8B)
ollama pull qwen3:8b

# Ollama server starts automatically on port 11434

Step 2: Configure OpenClaw

Settings → AI Provider → Set Base URL to http://localhost:11434/v1 → Leave API Key empty → Select your model from the dropdown

⚠️ Note: Some advanced OpenClaw features (like real-time model loading/unloading) work better with LM Studio. Ollama requires models to be pre-pulled via the CLI. Both backends work well for daily chatting.

8. The Skills System

One of OpenClaw's most powerful features is its skills system — a plugin architecture that extends the gateway beyond simple chat. Skills let OpenClaw orchestrate multi-step tasks, interact with external tools, and automate workflows that a basic prompt-response loop cannot handle.

What are Skills?

Skills are modular add-ons that give OpenClaw new abilities. They can interact with files, browse the web, execute code, and more — all running locally. Think of them as "apps" for your AI assistant.

Popular Skills

📁 File Manager

Read, write, and analyze local files. Great for document summarization and data processing.

🌐 Web Browser

Search the web and read pages without leaving the chat. Real-time information retrieval.

💻 Code Executor

Run Python, JavaScript, and shell scripts in a sandboxed environment. See output inline.

📊 Data Analyst

Process CSV/JSON data, generate charts, run statistical analysis — powered by your local model.

✅ Building Your Own Skills: Skills are written in TypeScript and follow a simple interface. Check the official docs if you want to build custom integrations for your workflow.
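To make the idea concrete, here is a toy skill in TypeScript. The Skill interface below is a hypothetical shape for illustration only — OpenClaw's real plugin API lives in the official docs — but it captures the "modular add-on that runs locally" pattern described above.

```typescript
// Hypothetical skill shape for illustration — not OpenClaw's real plugin API.
interface Skill {
  name: string;
  description: string; // shown to the model so it knows when to use the skill
  run(input: string): Promise<string>;
}

// A toy skill: count the words in a piece of text. Everything runs locally.
const wordCount: Skill = {
  name: "word-count",
  description: "Counts the words in a piece of text.",
  async run(input: string): Promise<string> {
    const words = input.trim().split(/\s+/).filter(Boolean);
    return `${words.length} words`;
  },
};

wordCount.run("local models are fun").then(console.log); // "4 words"
```

A real file-manager or code-executor skill follows the same contract, just with a more interesting run() body.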

9. OpenClaw vs Alternatives

How does OpenClaw compare to other local AI interfaces? Here's an honest breakdown:

Feature           | OpenClaw          | LM Studio Chat     | Ollama CLI         | ChatGPT
100% Local        | ✅ Yes            | ✅ Yes             | ✅ Yes             | ❌ Cloud
Open Source       | ✅ MIT            | ❌ Proprietary     | ✅ MIT             | ❌ Proprietary
Desktop App (GUI) | ✅ Polished       | ✅ Built-in        | ⚠️ CLI only        | ✅ Web + App
Skills / Plugins  | ✅ Extensive      | ❌ None            | ❌ None            | ✅ GPTs
Multi-Backend     | ✅ Any OpenAI API | ⚠️ Own engine only | ⚠️ Own engine only | ❌ OpenAI only
Free              | ✅ Forever        | ✅ Free            | ✅ Free            | ⚠️ $20/mo for Plus
Model Management  | ⚠️ Via backend    | ✅ Built-in        | ✅ CLI pull        | ❌ N/A

The verdict: OpenClaw shines as a universal AI gateway. It doesn't try to be a model manager or inference engine — it excels at being the orchestration layer that routes prompts, manages context, and extends functionality through skills. Pair it with LM Studio (for model discovery + download) and you get the best of both worlds.

10. Best Models to Use with OpenClaw

OpenClaw works with any model your backend can run. Here are our top picks for different use cases and RAM tiers:

Use Case          | 8 GB RAM          | 16 GB RAM             | 32 GB+ RAM
General Chat      | Qwen 3 8B         | Qwen 3 14B            | Qwen 3 32B
Coding            | Qwen 2.5 Coder 7B | DeepSeek Coder V2 16B | DeepSeek V3.2
Reasoning         | DeepSeek R1 8B    | Phi-4 Reasoning 14B   | DeepSeek R1 32B
Vision            | Gemma 3 4B        | Gemma 3 12B           | Gemma 3 27B
Bilingual (CN/EN) | GLM 4.7 Flash     | GLM 4.7               | Kimi K2.5

👉 Use our model recommender to find the perfect model for your exact hardware, or browse the complete model list with filters and benchmarks.

11. Pro Tips & Best Practices

💡 Use System Prompts Wisely

Create different "personas" with system prompts — a coding assistant, a writing coach, a data analyst. Save them in OpenClaw's preset system for instant switching. Example: You are a senior Python developer. Write clean, well-documented code with type hints. Explain your reasoning.
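Under the hood, a persona preset is little more than a saved system prompt prepended to each request's message list. A minimal sketch — the Persona type and withPersona helper are illustrative, not OpenClaw's API:

```typescript
// A "persona" is just a saved system prompt reused across conversations.
interface Persona {
  name: string;
  systemPrompt: string;
}

const pythonDev: Persona = {
  name: "Python Dev",
  systemPrompt:
    "You are a senior Python developer. Write clean, well-documented code " +
    "with type hints. Explain your reasoning.",
};

// Prepend the persona's system prompt to the user's message.
function withPersona(p: Persona, userMsg: string) {
  return [
    { role: "system", content: p.systemPrompt },
    { role: "user", content: userMsg },
  ];
}

console.log(withPersona(pythonDev, "Sort a list of dicts by key").length); // 2
```

Switching presets swaps the system message, nothing more — which is why persona switching is instant.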

💡 Context Window Management

Local models have limited context (typically 4K–128K tokens). When conversations get long, start a new session or use the /clear command. Long context eats RAM fast: KV-cache memory grows linearly with context length, so a 32K context uses roughly 4× the cache memory of an 8K one.
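A back-of-envelope estimate shows why long contexts are expensive — KV-cache memory scales with context length, so 32K costs about 4× the cache of 8K. The layer count and hidden size below are rough assumptions for an 8B-class model, not exact figures for any specific checkpoint:

```typescript
// Approximate KV-cache size for a standard multi-head-attention model:
// bytes ≈ 2 (K and V) × layers × contextTokens × hiddenSize × bytesPerValue
// This ignores grouped-query attention and quantized caches, which shrink it.
function kvCacheBytes(
  layers: number,
  hidden: number,
  ctx: number,
  bytesPerVal = 2 // fp16
): number {
  return 2 * layers * ctx * hidden * bytesPerVal;
}

// Rough 8B-class shape: ~32 layers, hidden size 4096, fp16 cache.
const at8k = kvCacheBytes(32, 4096, 8192);
const at32k = kvCacheBytes(32, 4096, 32768);

console.log(at8k / 2 ** 30); // 4   (≈ 4 GiB at 8K context)
console.log(at32k / at8k); // 4   (32K costs 4× the 8K cache — linear scaling)
```

That extra cache memory comes on top of the model weights themselves, which is why shrinking the context is the quickest fix when RAM runs tight.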

💡 Keyboard Shortcuts

Ctrl/Cmd + N = new chat, Ctrl/Cmd + Shift + S = settings, Ctrl/Cmd + K = search conversations, Esc = stop generation. Master these and you'll never touch the mouse.

💡 Start Small, Scale Up

If you're new to local AI, start with a small model (Qwen 3 8B, Q5_K_M quantization). Once comfortable, upgrade to 14B or 32B. The quality jump from 8B to 14B is significant, and from 14B to 32B is GPT-4 territory.

12. Frequently Asked Questions

Is OpenClaw safe to install?

Yes. OpenClaw is fully open-source (MIT license) with the entire codebase publicly auditable on GitHub. It has 68,000+ stars and an active community of contributors. The install script only downloads official releases. No code is obfuscated, no data is collected.

Do I need a GPU to use OpenClaw?

OpenClaw itself doesn't need a GPU — it's a gateway/orchestration layer, not an inference engine. Whether you need a GPU depends on your backend. On macOS with Apple Silicon, the unified memory handles everything. On Linux/Windows, a GPU significantly speeds up inference but is not strictly required (CPU-only mode works, just slower).

Can I use OpenClaw with a remote server?

Absolutely. OpenClaw can connect to any OpenAI-compatible endpoint — local or remote. Point it at your cloud GPU server, a Runpod instance, or even a friend's machine via SSH tunnel. Just change the Base URL in settings.

OpenClaw vs the built-in LM Studio chat — which is better?

LM Studio's built-in chat is good for quick testing. OpenClaw is better as a daily driver — it's a full assistant gateway with session management, slash commands, the skills/plugin system, multi-surface access (desktop + CLI + web), and it works with any backend (not just LM Studio). Think of LM Studio as the engine room and OpenClaw as the bridge.

How much disk space does OpenClaw need?

The OpenClaw application itself is under 200 MB. Conversation history is stored locally and typically takes negligible space. The real disk space requirement is for your AI models (managed by LM Studio or Ollama), which range from 2 GB to 40+ GB depending on the model.

Can I use OpenClaw with ChatGPT's API?

Yes — since OpenClaw supports any OpenAI-compatible endpoint, you can point it at OpenAI's API (https://api.openai.com/v1) with your API key. This gives you the OpenClaw interface with GPT-4 as the backend. But the whole point is to go local — try a local model first!

The Bottom Line

OpenClaw is the orchestration layer that ties the local AI stack together. You already have great models (Qwen 3, DeepSeek, Gemma 3) and great inference engines (LM Studio, Ollama). What was missing was a gateway — something that manages conversations, routes prompts to the right backend, extends functionality with skills, and gives you multiple ways to interact (desktop, CLI, web). OpenClaw fills that role.

If you're serious about running AI locally in 2026, the stack is clear: LM Studio + OpenClaw + the right model for your RAM. That's it. No subscription, no cloud dependency, no compromises.