I have an old gaming PC sitting under my desk. It hasn't seen a game in years, but it's perfect for something better: running my own AI assistant.
Moltbot (formerly Clawdbot) is a self-hosted AI assistant that connects to your messaging apps, executes shell commands, manages files, and maintains context across conversations. The official docs recommend macOS as the primary platform, but it runs perfectly fine on Windows through WSL2 — and if you have an old PC gathering dust, that's all you need.
This guide walks through the complete setup on Windows. I've been running this daily and documented every step, including the gotchas that aren't obvious from the docs alone.
Already familiar with Moltbot? Jump to the setup instructions.
Why Self-Hosting Is a Good Option
To be clear about what "self-hosted" means here: in this guide we're using cloud AI providers, but Moltbot also supports local models via Ollama if you want fully private inference. Beyond the model itself, what runs on your machine is the orchestration layer: Moltbot manages your conversations, workspace files, tool execution, and messaging integrations locally.
So why bother? A few reasons:
Full system integration. This isn't a chatbot in a browser tab. Your AI can execute shell commands, read and write local files, manage git repos, set up timely check-ins via heartbeats, take proactive action based on your preferences and instructions, browse the web, and send messages through Telegram, WhatsApp, Discord, etc. It's a proper assistant with actual capabilities that can work 24/7.
Persistent memory. Web interfaces forget context between sessions. Moltbot maintains state through workspace files — it remembers your projects, preferences, decisions, and ongoing work. That persistence compounds over time.
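To make the persistent-memory idea concrete, here's a minimal sketch. The directory layout and file names below are assumptions for illustration, not Moltbot's actual schema — the point is simply that memory is ordinary files that survive between sessions:

```shell
# Hypothetical workspace memory layout -- Moltbot's real file names may differ
ws=$(mktemp -d)                        # stand-in for the real workspace directory
mkdir -p "$ws/memory"
echo "- Prefers concise answers" >> "$ws/memory/preferences.md"
echo "- Ongoing: blog migration" >> "$ws/memory/projects.md"
cat "$ws/memory/preferences.md"        # the saved note reads back across sessions
```

Because it's plain files, you can inspect, edit, or back up your assistant's memory like anything else in the workspace.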
Your conversations and workspace stay local. While the AI model calls go to the cloud (just like any AI chat interface), your conversation history, workspace files, memory, and any local data the assistant works with are stored on your machine — not on someone else's cloud server.
Always-on availability. Connected to Telegram, WhatsApp, or other messaging apps, your assistant is reachable from your phone anytime. It can even proactively check things and message you when something needs attention.
The trade-off is that you're responsible for uptime. If your PC is off, your assistant is off. For me, that's fine. My PC runs 24/7 anyway since I also run N8N on it, and if I need to reboot for maintenance, my AI can wait ten minutes.
Local PC vs. VPS
You'll find a lot of guides recommending VPS setups (DigitalOcean, Hetzner, AWS). VPS makes sense if you need guaranteed uptime from anywhere or don't have a spare machine.
Just know that if you see an affiliate link for a $5-20/month plan, understand the economics at play. A blog post that links to DigitalOcean with a referral code earns passive income for as long as you keep paying, which creates a strong incentive to recommend VPS hosting even when it's not the best option. I'm not saying everyone doing this is acting in bad faith; VPS does have legitimate advantages. But when you see ten guides in a row recommending the same hosting provider with nearly identical setups, be aware of the incentive behind that recommendation.
AWS free tier is another option, but unless you're experienced with setting up cost alerts and understanding the pricing model, that 'free' tier can be a minefield. I've seen too many developers get surprised bills to recommend it casually.
But if you have an old PC at home, it's a perfectly valid option:
- Zero ongoing compute cost — just your existing AI subscription ($20/mo Claude Pro or API usage) or local model usage (Ollama, etc.)
- WSL2 gives you a proper sandboxed Linux environment — isolated from your Windows files by default
- No surprise bills — it's your hardware, running on your electricity
- Good enough uptime — if your PC runs throughout the day, your assistant is available throughout the day
You don't need powerful hardware. No GPU required. The AI processing happens in the cloud — your PC just orchestrates requests and stores state. An 8-year-old machine with 8GB RAM handles this fine.
Security & Privacy
Running an AI with shell access is what the Moltbot docs candidly call "spicy." Here's how the security model works:
WSL2 sandboxing — your AI runs in an isolated Linux environment. It can't access Windows files unless you explicitly mount them. Think of it as a dedicated Linux machine inside your PC.
Pairing system — nobody can message your bot without your explicit approval. Unknown senders get a pairing code that you must manually approve via CLI. No approval, no access.
Loopback binding — the gateway listens on 127.0.0.1 by default. Not exposed to your network, not exposed to the internet.
Gateway auth token — even local connections require a token generated during setup.
What the AI CAN access — be honest with yourself here: within WSL2, the AI has the same permissions as your Linux user. It can run shell commands, read/write files in its workspace, and make network requests. The containment is WSL2 isolation + the pairing system + loopback binding. That's solid, but it's not zero trust.
Dedicated PC isolation — Beyond WSL2 sandboxing, I'm also using a PC that doesn't have any of my sensitive accounts logged in. No personal Gmail, nothing. I gave Moltbot its own Gmail account, its own GitHub account, and only share things it needs as we go. This is what I recommend to everyone.
Built-in audit — run moltbot security audit --deep after setup and periodically afterward. It catches common misconfigurations. Full security docs: docs.molt.bot/gateway/security
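Alongside the built-in audit, you can spot-check directory permissions by hand. This sketch uses a temp directory as a stand-in for `~/.moltbot` (the installer should already have tightened the real one):

```shell
# Stand-in for ~/.moltbot -- substitute the real path on your machine
d=$(mktemp -d)
chmod 700 "$d"
stat -c '%a' "$d"    # 700 means only your user can read, write, or enter it
```

If `stat` on the real directory reports anything looser than 700, tighten it with `chmod 700 ~/.moltbot` and re-run the audit.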
Prerequisites
- A Windows 10/11 PC with WSL2 support (most installations from the last few years)
- An AI model subscription: Claude Pro ($20/mo) or Max ($100/mo), or ChatGPT Plus ($20/mo), or an API key (Anthropic, OpenAI, etc.). Prompt injection is a real risk with AI assistants that have system access, so smarter models like Claude Opus 4.5 or GPT 5.2 are recommended for better instruction following and safety.
- A Telegram account (easiest channel to start with — you can add WhatsApp/Discord later). For WhatsApp, a dedicated SIM card and phone number is recommended so you're not sharing your personal WhatsApp session.
- ~30 minutes
Step 1: Install WSL2 + Ubuntu
Open PowerShell as Administrator:
```shell
wsl --install
```
Create a Linux username and password when prompted. After installation, you'll land in an Ubuntu shell that looks something like this:
```
username@G-PC:/mnt/c/WINDOWS/system32$
```
This means you're in Linux now, not PowerShell. The `/mnt/c/` prefix shows your Windows drive is mounted, but you're operating from a Linux shell. Exit back to PowerShell anytime with `exit`, and reopen Ubuntu with `wsl`. Run `wsl --shutdown` to stop all WSL2 instances.
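If you ever lose track of which shell you're in, there's a quick check that works in any Linux shell, including WSL2:

```shell
uname -s    # prints "Linux" in the Ubuntu/WSL2 shell; PowerShell has no uname at all
```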
Step 2: Disable Windows PATH Injection
This prevents hours of debugging later. Windows injects its own PATH into WSL by default, which causes Node.js conflicts.
From Ubuntu:
```shell
sudo tee /etc/wsl.conf > /dev/null <<'EOF'
[interop]
appendWindowsPath=false
EOF
```
Exit Ubuntu (`exit`), then from PowerShell:
```shell
wsl --shutdown
```
Reopen Ubuntu (`wsl`) and verify:
```shell
echo $PATH | tr ':' '\n' | grep /mnt/c
```
No output means you're clean. This step is critical — skip it and you'll get bizarre npm errors later.
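To see what the verification catches, here's the same filter run over simulated values (both PATH strings are illustrative, not from a real machine):

```shell
# A PATH polluted by Windows interop vs. a clean one (illustrative values)
dirty='/usr/local/bin:/usr/bin:/mnt/c/WINDOWS/system32'
clean='/usr/local/bin:/usr/bin'
echo "$dirty" | tr ':' '\n' | grep /mnt/c              # prints the Windows entry
echo "$clean" | tr ':' '\n' | grep /mnt/c || echo OK   # grep finds nothing, so: OK
```

Any line starting with `/mnt/c` means Windows is still injecting its PATH and the `wsl.conf` change hasn't taken effect yet.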
Step 3: Install Moltbot
From Ubuntu (never from PowerShell):
```shell
curl -fsSL https://molt.bot/install.sh | bash
```
Say Yes to all permission prompts (tighten `~/.moltbot` permissions, create session store, create credentials dir, install gateway service).
Verify:
```shell
moltbot --version
```
Step 4: Run the Onboarding Wizard
```shell
moltbot onboard --install-daemon
```
Choose these options:
| Prompt | Selection |
|---|---|
| Gateway | Local |
| Workspace | Default |
| Bind | Loopback (127.0.0.1) |
| Auth | Token (recommended) |
| Tailscale | Off |
These defaults prioritize security. You can change any of them later through `moltbot config`.
Step 5: Set Up Claude Auth
When the wizard asks for model selection, choose anthropic/claude-sonnet-4-5, or anthropic/claude-opus-4-5 (recommended) if you want the strongest model. If you prefer OpenAI, select that option and either provide your API key or log in with ChatGPT; the setup flow is similar.
For Anthropic auth, I chose "Anthropic token (paste setup-token)". This uses your existing Claude Pro or Max subscription — no additional API costs. You can choose "API key" if you specifically want pay-per-use billing, but be aware that API usage can burn through your balance quickly with an always-on assistant: providers won't rate-limit you, they'll consume your full balance. On the other hand, I've heard of people getting flagged for heavy automated usage on subscription plans. There's no perfect option, just trade-offs. Pick what you're comfortable with.
Copy the token, paste it into the wizard.
Step 6: Connect Telegram
You need a bot token from @BotFather on Telegram:
- Open Telegram, search for @BotFather
- Send /newbot and follow the naming prompts
- Copy the bot token BotFather gives you
- Paste it into the Moltbot wizard
When asked "Configure DM access policies now?" → Yes. Use the recommended pairing mode.
Step 7: Start the Gateway
```shell
moltbot gateway
```
Healthy output shows: listening on ws://127.0.0.1:18789, model configured, providers starting.
Open the Control UI in your browser: http://127.0.0.1:18789/
Note: when started this way, the gateway stops as soon as you close the terminal.
Step 8: Pair Your First Message
Open Telegram and message your bot by tapping the username BotFather gave you:
```
/start
```
The bot responds with a pairing code. In a new Ubuntu terminal:
```shell
moltbot pairing approve telegram <PAIR_CODE>
```
Send another message. It should respond. You're live.
Gotchas From Real Experience
These are the mistakes that waste time:
- Run Moltbot under Linux. The docs say: "WSL2 is strongly recommended; native Windows is untested, more problematic, and has poorer tool compatibility."
- Never run Moltbot commands from PowerShell. Always Ubuntu. PowerShell is Windows; Moltbot runs in Linux.
- Be careful with auth choice. This guide recommends the setup-token approach (Step 5) because it uses your existing Claude subscription; see that step for the trade-offs between subscription auth and pay-per-use API keys.
- Loopback = no remote access. That's intentional. Add Tailscale later if you need mobile access.
- WSL IP changes on reboot. Doesn't matter for basic setup, but relevant if you do advanced networking later.
What Your AI Can Actually Do
Once running, Moltbot is dramatically more capable than web AI interfaces. Here's what changes:
Proactive execution. Your AI can monitor your inbox and surface what's urgent. It can audit your SEO data and open a PR with fixes before you even notice the problem. Mine reviewed 74 project tickets overnight, categorized them by launch priority, and had a full restructure ready for me to review by morning. This isn't you asking questions — it's your assistant working while you sleep.
Shell command execution. Ask it to check disk space, restart a service, pull the latest code from a repo, or run a build script. It executes commands in its workspace just like you would in a terminal. The difference from a web chatbot giving you commands to copy-paste is night and day — it just does the work.
File operations. It can read logs, edit configuration files, search through codebases, organize directories, or write code directly into files. Need a script written and saved? Done. Want to update a config across multiple files? Done. This is full filesystem access within the workspace, not simulated file handling.
Web research and browsing. Moltbot can control an actual browser — navigate sites, fill forms, extract data, take screenshots. Combine that with shell access and you can automate workflows like "scrape this site, process the data with a Python script, and send me the results."
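The "process the data with a script" step in that workflow can be as mundane as a one-liner. The CSV contents and field layout below are made up for illustration; in practice the browser step would produce the file:

```shell
# Fake scraped data -- in a real run the browsing step would write this file
printf 'item,price\nwidget,3\ngadget,7\n' > /tmp/scraped.csv
# Summarize it; the assistant would then message you the result
awk -F, 'NR>1 {sum+=$2} END {print "total:", sum}' /tmp/scraped.csv   # total: 10
```

The interesting part isn't the script itself — it's that the assistant can chain the browsing, the processing, and the Telegram message without you gluing the steps together.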
Calendar integration. Connect your Google Calendar and your AI knows your schedule. It can remind you about upcoming events, suggest optimal meeting times, or even block time for focus work. The context compounds: it remembers your work patterns and projects, so reminders are relevant.
Coding assistance with context. It has access to your actual project files. That means code reviews that see the full context, refactoring suggestions based on your existing architecture, and debugging with real logs and error output. It's not guessing about your setup — it's reading your repo.
Persistent memory across sessions. Web interfaces forget everything when you close the tab. Moltbot maintains workspace files — memory notes, project context, preferences, ongoing work. You don't re-explain who you are or what you're working on. Context compounds over weeks and months.
Extensible via skills. The ClawdHub community shares plugins that add specific capabilities: smart home control, API integrations, workflow automation, data processing. Install what you need, ignore the rest. Your assistant grows more capable over time without code changes.
The combination of system access + messaging integration + persistent memory is what makes this fundamentally different from chatbot interfaces. It's less "ask questions, get answers" and more "delegate work to a capable assistant who remembers everything."
Next Steps
WhatsApp integration. Telegram works great for getting started, but WhatsApp is where most people actually live. Setup is straightforward — you scan a QR code to link your WhatsApp account, and your AI becomes reachable through normal WhatsApp messages. The pairing system works the same way. One important note: consider getting a dedicated SIM card for this. Sharing your personal WhatsApp session with your AI means it has access to all your chats and contacts. A separate number keeps things cleanly isolated. Full guide: docs.molt.bot/channels/whatsapp
Discord integration. If you hang out in Discord servers or use it for work, your AI can join too. Setup is similar to Telegram — create a bot, get a token, configure pairing. Useful if you want your assistant available in team channels or project servers. docs.molt.bot/channels/discord
Remote access via Tailscale. The loopback binding we used during setup means your AI is only accessible from the same machine. That's secure, but limiting. Tailscale gives you secure remote access — your phone can reach your home PC's AI over an encrypted mesh network, no port forwarding needed. Think of it as a private VPN between your devices. Once configured, your assistant works from anywhere without exposing anything to the public internet. docs.molt.bot/gateway/remote
Install skills from ClawdHub. Skills are community-built plugins that extend what your AI can do — smart home control, specific API integrations, workflow automation tools. Browse ClawdHub and install what fits your needs. The modularity means you don't bloat your setup with capabilities you'll never use.
Configure heartbeats. Heartbeats are periodic check-ins where your AI proactively looks at things that matter to you — email, calendar, system status, package tracking — and messages you when something needs attention. This is where "assistant" becomes more than a chatbot. Instead of you remembering to check things, your AI monitors them and interrupts you when relevant. The configuration lives in your workspace files, so you control what gets checked and how often.
Run as a persistent service. Right now, the gateway stops when you close the terminal. That's fine for testing, but not for daily use. If your WSL2 supports systemd, you can configure the gateway to start automatically and run in the background. Alternatively, use screen or tmux to keep the session alive. For a PC that runs 24/7, you want this set up so your AI is always available without manual intervention.
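If systemd is enabled in your WSL2 distro, a user service along these lines keeps the gateway running. The unit name and binary path here are assumptions — point ExecStart at wherever the installer actually put moltbot (check with `which moltbot`):

```ini
# ~/.config/systemd/user/moltbot-gateway.service (hypothetical unit)
[Unit]
Description=Moltbot gateway

[Service]
# Adjust to the path reported by `which moltbot`
ExecStart=/usr/local/bin/moltbot gateway
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now moltbot-gateway`. If systemd isn't available in your WSL2 setup, `tmux new -s moltbot 'moltbot gateway'` keeps the session alive after you detach.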
Resources: