
Install OpenClaw on Windows (WSL2) — Step-by-Step


Install OpenClaw (formerly ClawdBot) on Windows 10/11 using WSL2 + Ubuntu. Includes real fixes for common issues (node/npm PATH, port conflicts, Telegram pairing).


I have an old gaming PC sitting under my desk. It hasn't seen a game in years, but it's perfect for something better: running my own AI assistant.

OpenClaw (formerly ClawdBot) is a self-hosted AI assistant that connects to your messaging apps, executes shell commands, manages files, and maintains context across conversations.

OpenClaw’s homepage advertises a cross-platform install script. On Windows, you can use the native PowerShell installer, but the official OpenClaw docs still recommend WSL2 and explicitly warn that native Windows is "untested, more problematic, and has poorer tool compatibility." So this guide goes the WSL2 route.

Wanna jump right into the instructions? Skip to Step 1.

This guide walks through the complete setup on Windows via WSL2. It includes the annoying stuff I actually ran into (PATH issues, port conflicts, and "why is npm acting weird" moments) and ends with a short troubleshooting section.

Why Self-Hosting is a Good Option

To be clear about what "self-hosted" means here: in this guide, we're still using cloud AI providers, but OpenClaw also supports local models via Ollama if you want fully private inference. Besides the AI model you use, what runs on your machine is the orchestration layer — OpenClaw manages your conversations, workspace files, tool execution, and messaging integrations locally.

So why bother? A few reasons:

Full system integration. This isn't a chatbot in a browser tab. Your AI lives inside your workspace: it can execute shell commands, read and write local files, manage git repos, run scheduled check-ins via heartbeats, browse the web, and send messages through Telegram, WhatsApp, Discord, etc.

Persistent memory. Web interfaces forget context between sessions. OpenClaw maintains state through workspace files — it remembers your projects, preferences, decisions, and ongoing work. That persistence compounds over time.

Your conversations and workspace stay local. While the AI model calls go to the cloud (just like any AI chat interface), your conversation history, workspace files, memory, and any local data the assistant works with are stored on your machine.

Always-on availability. Connected to Telegram or WhatsApp (or other channels), your assistant is reachable from your phone anytime.

The trade-off is that you're responsible for uptime. For me, that's fine. My PC runs 24/7 anyway since I also run n8n on it.

Local PC vs. VPS (quick take)

VPS makes sense if you need guaranteed uptime from anywhere.

Just keep in mind: a lot of “$5/month droplet” tutorials are written to sell hosting. VPS is a legit option — but if you already have an old PC at home, it’s also a perfectly valid (and cheaper) setup.

If you have spare hardware:

  • Zero ongoing compute cost — just your existing AI subscription (Claude/ChatGPT/etc.) or local models
  • WSL2 gives you a sandboxed Linux environment — isolated from your Windows files by default
  • No surprise cloud bills — it’s your hardware, running on your electricity

Security & Privacy

Running an AI with shell access is what the OpenClaw docs candidly call "spicy." Here's how the security model works:

WSL2 sandboxing — your AI runs in an isolated Linux environment. It can’t access Windows files unless you explicitly mount them.

Pairing system — nobody can message your bot without your explicit approval. Unknown senders get a pairing code that you must manually approve via CLI.

Loopback binding — the gateway listens on 127.0.0.1 by default (not exposed to your network).

Gateway auth token — even local connections require a token generated during setup.

What the AI CAN access — within WSL2, the AI has the same permissions as your Linux user. It can run shell commands, read/write files in its workspace, and make network requests.

Dedicated PC isolation — I also use a machine that doesn’t have my sensitive accounts logged in. I gave OpenClaw its own accounts (GitHub, Gmail) and only share what it needs.

Built-in audit — run openclaw security audit --deep after setup and periodically afterward. Full security docs: https://docs.openclaw.ai/gateway/security

Prerequisites

  • A Windows 10/11 PC with WSL2 support
  • An AI model subscription (Claude Pro/Max, ChatGPT Plus) or an API key
  • (Optional) local LLMs via Ollama if you want fully private inference
  • A Telegram account (easiest channel to start with)
  • ~30 minutes


Step 1: Install WSL2 + Ubuntu

Open PowerShell as Administrator:

wsl --install

Create a Linux username and password when prompted. After installation, you'll land in an Ubuntu shell that looks something like this:

username@G-PC:/mnt/c/WINDOWS/system32$

You’re in Linux now (not PowerShell). Exit back to PowerShell anytime with exit. Reopen Ubuntu with wsl. You can do wsl --shutdown to close all WSL2 instances.

Step 2: Disable Windows PATH Injection (prevents weird Node/npm behavior)

Windows injects its own PATH into WSL by default. In practice this can cause confusing Node.js/npm issues.

In my case, before I disabled this, node -v in Ubuntu resolved to a Windows-installed Node path and npm behavior got unpredictable.

From Ubuntu:

sudo tee /etc/wsl.conf > /dev/null <<'EOF'
[interop]
appendWindowsPath=false
EOF

Exit Ubuntu (exit), then from PowerShell:

wsl --shutdown

Reopen Ubuntu (wsl) and verify:

echo $PATH | tr ':' '\n' | grep /mnt/c

No output means you're clean.
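If you only need a session-only workaround (or just want to see what the cleanup does), you can also filter Windows entries out of a PATH by hand. This is purely an illustration on a dummy PATH variable; the /etc/wsl.conf change above is the permanent fix:

```shell
# Demo on a dummy PATH; in a live shell you'd start from "$PATH" instead
DEMO_PATH="/usr/local/bin:/mnt/c/Windows/System32:/usr/bin:/mnt/c/Program Files/nodejs"

# Split on ':', drop every /mnt/c entry, rejoin with ':'
CLEANED=$(printf '%s' "$DEMO_PATH" | tr ':' '\n' | grep -v '^/mnt/c' | paste -sd ':' -)
echo "$CLEANED"   # /usr/local/bin:/usr/bin
```

The same pipeline is just the inverse of the verification command above: instead of grepping for /mnt/c entries, it removes them.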

Step 3: Install OpenClaw

From your Ubuntu shell:

curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash

Say Yes to all permission prompts (tighten permissions, create session store, create credentials dir, install gateway service).

Verify:

openclaw --version

Step 4: Run the Onboarding Wizard

openclaw onboard

Recommended secure defaults:

  • Gateway: Local
  • Workspace: Default
  • Bind: Loopback (127.0.0.1)
  • Auth token: recommended
  • Tailscale: Off

You can change any of them later through openclaw config.

Step 5: Set Up Claude/OpenAI Auth

Prompt injection is a real risk with AI assistants that have system access, so I recommend using a smarter model.

My top choices:

  • anthropic/claude-opus-4-5
  • openai-codex/gpt-5.2

In this guide, let’s go with Claude Opus 4.5.

For Anthropic auth, I chose "Anthropic token (paste setup-token)". This uses your existing Claude Pro or Max subscription — no additional API costs.

You can choose "API key" if you specifically want pay-per-use billing, but be aware that API usage can burn through your balance quickly with an always-on assistant. With API keys, providers won't rate-limit you; they'll happily consume your full balance.

On the other hand, I've also heard of people getting flagged for heavy automated usage on subscription plans.

There’s no perfect option, just trade-offs. Pick what you’re comfortable with.

Step 6: Connect Telegram

You need a bot token from @BotFather:

  1. Open Telegram, search @BotFather
  2. Send /newbot, follow the naming prompts
  3. Copy the bot token
  4. Paste it into the OpenClaw wizard
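Before pasting the token into the wizard, you can sanity-check it against the Telegram Bot API's getMe endpoint. The token below is a made-up placeholder; substitute the one @BotFather gave you:

```shell
# Placeholder token; real ones look like 123456789:AAE...
BOT_TOKEN="123456:EXAMPLE"
GETME_URL="https://api.telegram.org/bot${BOT_TOKEN}/getMe"
echo "$GETME_URL"

# Network check (uncomment with a real token); a valid token returns {"ok":true,...}
# curl -s "$GETME_URL"
```

If getMe returns `"ok":false`, the token was copied wrong and the wizard will fail too.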

Step 7: Start the Gateway

If you went through the onboarding wizard and didn’t exit during the process, your Gateway may already be running.

If it isn’t, start it with:

openclaw gateway

Healthy output shows it listening on ws://127.0.0.1:18789.

Open the Control UI: http://127.0.0.1:18789/
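A quick way to confirm the Control UI is actually reachable from inside WSL is a simple curl probe (adjust the port if you changed it during onboarding):

```shell
# Prints the HTTP status code, or "unreachable" if nothing answers within 2s
STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://127.0.0.1:18789/) || STATUS="unreachable"
echo "Control UI: $STATUS"
```

Anything in the 200/300 range means the gateway is up and serving on loopback.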

Step 8: Pair Your First Message

Open Telegram and message your bot:

/start

The bot responds with a pairing code. In a new Ubuntu terminal:

openclaw pairing approve telegram <PAIR_CODE>

Send another message. You should get a response.

What Your AI Can Actually Do (real examples)

Once running, OpenClaw is dramatically more capable than web AI interfaces because it has tools + persistent state + messaging — not just a coding agent in a terminal.

In my first 2 days of using it, these were the biggest "this is different" moments:

  1. Proactive check-ins (heartbeats) that keep work moving while you sleep

I asked it for proactive output overnight (midnight–8am) and daytime check-ins, and it actually follows through.

A tiny example prompt that worked well for me:

"Every day at 8am/12pm/3pm/6pm/9pm: check my active tickets + top priorities and message me what matters. Overnight: pick one high-leverage task and ship a concrete artifact (PR, doc, ticket updates)."

  2. Page-by-page SEO optimization (not just advice)

While I was busy, it proactively started optimizing the Onchainsite blog page by page:

  • pulled our sitemap and built a parent ticket with every post
  • did keyword + performance checks (DataForSEO + Google Search Console)
  • rewrote posts to follow Google’s helpful content guidelines
  • opened PRs with the edits
  • kept the Linear ticket updated with PR links + checkboxes

  3. It opens PRs instead of giving copy-paste suggestions

It can make changes in your repo, run checks, and open a PR for review — which is the difference between "ideas" and "shipping."

  4. It writes the annoying one-off scripts that unblock work

Quick scripts for data pulls, audits, formatting fixes, and ticket updates — the kind of work most people avoid because it’s tedious.

  5. It can act like a real PM/architect, not just a coder

I’ve had it draft PRDs and break complex features into sub-tickets with clear phases, acceptance criteria, and links back to implementation work.

That’s the leap: less “chatbot”, more “teammate that can actually ship.”

Next Steps (things most guides don’t mention)

Join the OpenClaw community. The fastest way to learn is reading other setups and asking questions. Discord: https://discord.gg/clawd

Send your agent to Moltbook (agent social network). It's a community where agents post, comment, and share skills.

Give your agent a wallet (carefully). If you’re in crypto, you can install skill packs like BankrBot’s Moltbot/OpenClaw skills: https://github.com/BankrBot/moltbot-skills

If you go down this route, treat it like giving a junior teammate a hot wallet: start with tiny balances, strict allowlists, and lots of logging.

Remote access via Tailscale. Once you’re confident in your setup, Tailscale gives secure remote access without port forwarding: https://docs.openclaw.ai/gateway/remote

Configure heartbeats. Heartbeats are periodic check-ins where your AI proactively looks at what matters and messages you only when something needs attention.
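A minimal sketch of what a heartbeat checklist might look like, assuming your workspace uses a HEARTBEAT.md file as described in the OpenClaw docs (file name and conventions may differ in your version; check the docs):

```markdown
# HEARTBEAT.md — checked on each heartbeat
- Any unread messages that need a reply?
- Any open PRs waiting on review?
- Quiet hours: 11pm–7am; only ping me if something is urgent
```

The point is that the checklist lives in the workspace, so you can edit it like any other file and the AI picks it up on the next heartbeat.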

Troubleshooting (save yourself 30 minutes)

If something doesn’t work, here are the fastest checks:

  • Gateway won’t start (port conflict):

    sudo lsof -i :18789
  • You’re accidentally running in PowerShell (Windows), not Ubuntu (WSL2): Make sure your prompt looks like username@...:~$ and not PS C:\....

  • Node/npm weirdness in WSL: Re-check Step 2 (PATH injection). Then fully restart WSL (run from PowerShell, not Ubuntu):

    wsl --shutdown
  • Control UI doesn’t open: Confirm the gateway is running and listening on loopback:

    openclaw gateway
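If lsof isn't installed, bash's built-in /dev/tcp pseudo-device can tell you whether anything is listening on the gateway port. It's a rough sketch: it confirms the port is taken but won't tell you which process owns it.

```shell
PORT=18789
# bash treats /dev/tcp/HOST/PORT as a TCP connection; success means the port is in use.
# The (subshell) scopes fd 3 so it closes automatically on exit.
if (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; then
  STATUS="in use"
else
  STATUS="free"
fi
echo "port $PORT is $STATUS"
```

If the port is free yet the gateway still won't start, the problem is elsewhere (check the gateway logs rather than the port).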

Resources: