
Private AI Assistant Hosting — Your Data, Your Server

When you use ChatGPT or Claude.ai, your data lives on their servers. With OpenClaw on PlugAndClaw, your data lives on yours.

The Privacy Problem with Consumer AI Services

When you chat with ChatGPT on the web, your conversations are stored on OpenAI's servers and may be used to train future models unless you opt out, and the opt-out controls are easy to miss. OpenAI's retention policies allow your data to be kept long after the conversation ends. The same applies to Claude.ai (Anthropic's web interface), Google's Gemini web interface, and virtually every consumer AI product.

For casual use — asking about recipes, writing help, general questions — this might be acceptable. But when your AI assistant knows your business strategy, your client list, your financial situation, your personal health concerns, or your professional communications, the calculus changes. You're handing intimate knowledge about your life and business to corporations whose privacy practices and data usage policies you can't audit.

OpenClaw, hosted on your own server, solves this at the architecture level. Your conversations never touch a third-party storage system. Your memory files — where OpenClaw stores what it has learned about you, your preferences, your ongoing projects — live on a server you control. You can read them, edit them, delete them, or export them at any time.

The LLM APIs still see your messages in transit (that's unavoidable — the AI model needs to read your text to respond). But API usage is different from web interface usage: Anthropic's API terms explicitly state that API inputs and outputs are not used to train models. You're a customer buying compute, not a product providing training data.

OpenClaw's Memory System and Data Sovereignty

OpenClaw's memory architecture is designed for a self-hosted world. Every piece of persistent state lives in files on your server.

The memory system has three tiers. Daily memory files (memory/YYYY-MM-DD.md) capture what happened in each session — tasks completed, decisions made, information shared. These are raw session logs. The long-term memory file (MEMORY.md) is your assistant's curated knowledge about you — distilled from daily notes over time, updated by the assistant itself. Finally, SOUL.md and AGENTS.md define your assistant's personality, workspace rules, and behavior — they're read at the start of every session.
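On disk, the three tiers look something like this. The workspace root path is illustrative; only the filenames and the memory/ naming convention come from OpenClaw itself:

```
~/openclaw/
├── SOUL.md              # personality and behavior, read at session start
├── AGENTS.md            # workspace rules, read at session start
├── MEMORY.md            # curated long-term knowledge, updated by the assistant
└── memory/
    ├── 2025-06-01.md    # raw daily session log
    └── 2025-06-02.md
```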

All of these files exist on your VPS. You can SSH in and read them with cat. You can edit them with nano or vim. You can back them up with rsync or push them to a private Git repository. You can grep through them to find what your assistant knows. This is data sovereignty: you have direct, unrestricted access to every piece of information your assistant holds.
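In practice, that access looks like ordinary shell commands. The paths, hostname, and search term below are all placeholders; adjust them to your own workspace:

```shell
# All paths and hostnames here are illustrative -- substitute your own.

# Read what your assistant knows about you
cat ~/openclaw/MEMORY.md

# Search every memory file for a topic
grep -ri "project alpha" ~/openclaw/memory/

# Back up the whole workspace to your local machine
rsync -avz you@your-vps:~/openclaw/ ./openclaw-backup/

# Or version it in a private Git repository
cd ~/openclaw && git init && git add -A && git commit -m "memory snapshot"
```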

Contrast this with ChatGPT's memory feature: it stores summaries of conversations in OpenAI's database, you can review but not fully audit what's stored, and OpenAI controls the storage and deletion. With OpenClaw, the equivalent of ChatGPT's 'memory' is a text file on your server that you own completely.

When you delete a memory file from your OpenClaw server, it's gone. You don't need to file a data deletion request with a corporation and wait 30 days to see if it was honored.

Technical Privacy Stack on PlugAndClaw

Privacy is only meaningful if it's implemented at the technical level. PlugAndClaw's infrastructure choices reflect that principle.

LUKS2 full-disk encryption: Every PlugAndClaw VPS uses LUKS2 to encrypt the block device before the OS boots. This means your conversation history, memory files, environment variables (including API keys), and all application data are encrypted at rest. The encryption key is tied to your provisioning — PlugAndClaw staff cannot decrypt your data without the key.
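You can verify the encryption yourself from an SSH session. The mapper name and partition below are common defaults, not guaranteed; device names vary by image:

```shell
# Confirm at-rest encryption on the VPS (device names vary by image).
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT    # look for a "crypt" TYPE in the device tree
sudo cryptsetup status cryptroot        # "cryptroot" is a common mapper name; yours may differ
sudo cryptsetup luksDump /dev/sda3      # header shows "Version: 2" for LUKS2; partition is illustrative
```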

SSH with key-based authentication only: Password authentication for SSH is disabled. Only an Ed25519 or RSA key you control can log into your server. Brute-force SSH attacks (which start hitting any public IP within minutes of it coming online) are useless against key-only authentication.
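The relevant sshd_config directives, and a way to check them without reading the file by hand, look roughly like this (a sketch of a standard OpenSSH hardening setup, not PlugAndClaw's exact configuration):

```shell
# Relevant directives in /etc/ssh/sshd_config (hardened values shown):
#   PasswordAuthentication no
#   PubkeyAuthentication yes
#   PermitRootLogin prohibit-password

# Verify the effective settings sshd is actually running with:
sudo sshd -T | grep -Ei 'passwordauthentication|pubkeyauthentication'
```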

UFW firewall with default-deny: The firewall blocks all inbound traffic except ports 22 (SSH), 80 (HTTP — redirected to HTTPS by Caddy), and 443 (HTTPS). OpenClaw's internal ports are never exposed. The Telegram bot webhook is served through Caddy's HTTPS termination.
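A default-deny policy like the one described can be reproduced with a handful of UFW commands (illustrative; run as root on a fresh server, and make sure port 22 is allowed before enabling or you will lock yourself out):

```shell
# Default-deny inbound, allow only SSH and web traffic.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp     # SSH
ufw allow 80/tcp     # HTTP, redirected to HTTPS by Caddy
ufw allow 443/tcp    # HTTPS
ufw enable
ufw status verbose
```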

Caddy with automatic TLS: All traffic between Telegram's servers and your OpenClaw instance is encrypted in transit. Caddy uses Let's Encrypt to provision and auto-renew TLS certificates. HTTP Strict Transport Security (HSTS) headers are set to prevent protocol downgrade attacks.
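A minimal Caddyfile for this setup is only a few lines. The domain and upstream port are assumptions; Caddy provisions the Let's Encrypt certificate automatically for any named site:

```
# Sketch of a Caddyfile -- domain and upstream port are placeholders.
bot.example.com {
    header Strict-Transport-Security "max-age=31536000"
    reverse_proxy 127.0.0.1:8080
}
```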

No PlugAndClaw telemetry: PlugAndClaw doesn't install monitoring agents that phone home with your conversation data. Server health monitoring (CPU, memory, disk) is done with standard OS metrics that don't touch application data.

BYOK: Bringing Your Own API Keys

PlugAndClaw includes $20/month in AI credits via OpenRouter, covering access to Claude Opus 4.6, Sonnet 4.6, Haiku 4.5, GPT-5.2, Gemini 3 Flash, Kimi K2.5, and Minimax M2.5. For most users, this covers typical usage comfortably.

For users who want direct API relationships with providers — for compliance reasons, specific data processing agreements, or higher usage volumes — PlugAndClaw fully supports BYOK (Bring Your Own Keys). Configure your Anthropic API key, OpenAI API key, or Google API key directly in your server's environment, and OpenClaw will use them instead of the shared OpenRouter proxy.
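Configuration is a matter of setting environment variables in your server's environment file. The variable names below are the providers' conventional ones; check OpenClaw's documentation for the exact names it reads:

```
# Illustrative environment file entries for BYOK.
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
```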

BYOK has a privacy advantage: your API traffic goes directly to the provider under your own account's terms of service and data processing agreements. If your company has a Business Associate Agreement (BAA) with Anthropic for healthcare use, for example, BYOK ensures your OpenClaw traffic falls under that agreement.

BYOK also has a cost advantage if you use large volumes of AI tokens. OpenRouter charges a small markup over provider list prices. Direct API keys at volume often come with negotiated rates. If you're spending more than $50/month on AI tokens, it's worth comparing OpenRouter pricing vs direct provider pricing for your actual usage mix.
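The break-even math is simple enough to sanity-check yourself. Every figure below is a placeholder, not a quoted rate; plug in current OpenRouter and provider pricing for your actual model mix:

```shell
# All figures are placeholders -- check current OpenRouter and provider pricing.
tokens_m=20          # millions of tokens per month (assumed volume)
direct_rate=3.00     # USD per million tokens at the provider's list price (assumed)
markup_pct=5         # assumed OpenRouter markup over list price

direct_cost=$(awk -v t="$tokens_m" -v r="$direct_rate" 'BEGIN { printf "%.2f", t * r }')
router_cost=$(awk -v c="$direct_cost" -v m="$markup_pct" 'BEGIN { printf "%.2f", c * (1 + m / 100) }')
echo "direct: \$${direct_cost}/mo, via OpenRouter: \$${router_cost}/mo"
```

At these placeholder numbers the gap is a few dollars a month; it only becomes decision-relevant at sustained high volume.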

OpenClaw Privacy vs ChatGPT Team and Enterprise

ChatGPT Team and Enterprise plans offer improved privacy over the free tier: they don't use conversations to train models and offer data residency options. These are valid improvements. But they still involve your data living on OpenAI's infrastructure, under OpenAI's policies, with OpenAI's staff theoretically able to access it for trust and safety purposes.

For regulated industries — legal, medical, financial — this is often a dealbreaker. Attorney-client privilege, HIPAA, and financial data regulations impose strict requirements on where data lives and who can access it. Telling a regulator that your confidential client data is stored on ChatGPT Enterprise is a difficult conversation.

OpenClaw on your own server avoids this entirely. Your data never leaves a server you control. If a regulator asks where your data lives, the answer is: 'On our server, encrypted with LUKS2, in a Hetzner data center in Germany under GDPR jurisdiction.' That's a much cleaner answer.

The tradeoff is capability and convenience. ChatGPT Enterprise has more polished team collaboration features, a web interface accessible from any browser, and no setup required. OpenClaw requires more technical comfort but delivers genuine data sovereignty. For privacy-conscious professionals and businesses, that tradeoff is worth making.

PlugAndClaw makes the OpenClaw path easier by handling the technical setup. You get private AI hosting without needing to become a Linux administrator.

Frequently Asked Questions

Who can see my OpenClaw conversations?

On PlugAndClaw, your conversations are stored only on your dedicated VPS. PlugAndClaw staff cannot access your conversation history, memory files, or SOUL.md without your explicit permission (e.g., for support). The LLM APIs (Anthropic, OpenAI, Google) process your messages in transit to generate responses, but do not use API traffic for training by default — unlike web interfaces like ChatGPT or Claude.ai, where conversations may be used to train future models unless you opt out.

Does OpenClaw store my data on third-party servers?

OpenClaw itself stores everything on your server. Your memory files (MEMORY.md, daily notes), SOUL.md, AGENTS.md, and any files your assistant creates all live on your VPS. The only external data flow is: (1) your messages to the LLM API for generating responses, and (2) Telegram messages through Telegram's infrastructure. Both are encrypted in transit. No third-party service has persistent access to your data.

How does LUKS2 encryption protect my data?

LUKS2 (Linux Unified Key Setup version 2) encrypts the entire block device — the virtual disk of your VPS. All data written to disk, including OpenClaw memory files, configuration, and API keys stored in environment files, is encrypted before it hits storage. The encryption key is derived from a passphrase set during provisioning. Even if someone extracted the physical disk image from Hetzner's hardware, they'd see only encrypted data.

Is it safe to store sensitive work information in OpenClaw's memory?

With proper setup (LUKS2 encryption, key-based SSH auth, UFW firewall), OpenClaw's memory system is as secure as any properly configured Linux server. For highly sensitive professional information — legal, medical, financial — use BYOK (bring your own API keys) to ensure your messages are governed by your direct API contract with the provider, review the relevant provider's API data handling policy, and consider additional hardening like audit logging.

Your AI assistant. Live in under 1 minute.

Get Private AI Hosting

$39.50/month · 7-day money-back guarantee · Cancel anytime