Haven't installed OpenClaw yet? Click here for one-line install commands
curl -fsSL https://openclaw.ai/install.sh | bash
iwr -useb https://openclaw.ai/install.ps1 | iex
curl -fsSL https://openclaw.ai/install.cmd -o install.cmd && install.cmd && del install.cmd
Worried about affecting your computer? ClawTank runs in the cloud with no installation required, eliminating the risk of accidental deletions
Key Findings
  • OpenClaw is fundamentally an agent program with full control over your computer — it can read any file, execute any command, and access any network service. Security is not optional; it is a prerequisite[1]
  • The CVE-2026-25253 vulnerability disclosed in February 2026 confirmed that a misconfigured OpenClaw instance can be taken over remotely with a single click, enabling arbitrary code execution[3]
  • Docker sandboxing provides the strongest isolation — completely separating the agent's execution environment from the host system, so even if the agent behaves abnormally, it won't affect your main machine[4]
  • Skill supply chain attacks are currently the most underestimated risk vector — any skill.md can contain malicious instruction templates that induce the LLM to execute dangerous operations[1]

1. The Risk Landscape: Why AI Agents Need Special Treatment

Traditional AI applications (such as the ChatGPT web interface) run in cloud sandboxes — even if the AI produces problematic output, it cannot actually "do" anything. But OpenClaw is different — it is installed on your computer, runs under your user identity, and has the same system privileges as you.[8]

This means every decision the AI agent makes can have real-world consequences: deleting files, modifying configurations, sending network requests, installing software. CrowdStrike described this model in their analysis report as an "LLM-driven Remote Access Trojan (RAT)" — not because OpenClaw itself is malware, but because its capability model is nearly identical to a RAT at the technical level.[1]

2. OpenClaw's Security Mechanisms

2.1 Gateway Token Authentication

The Gateway Token is OpenClaw's first line of defense. All requests connecting to the Gateway — whether from the CLI, Web UI, or messaging channels — must carry a valid Token.[5]

Token management:

# Generate a new Token
openclaw doctor --generate-gateway-token

# Token storage location
~/.openclaw/openclaw.json -> gateway.auth.token

Key principle: The Token must remain confidential. Anyone who obtains the Token can fully control your agent.

2.2 Pairing Mechanism

Messaging channels (Telegram, WhatsApp, etc.) use an additional Pairing mechanism. Even if someone knows your Bot's username, they cannot issue any commands without completing the pairing process.[4]

2.3 DM Policy and Group Policy

Through DM Policy and Group Policy, you can precisely control which users can interact with the agent. The most secure setting is owner_only — only the paired owner can use the agent.

2.4 Docker Sandbox

Docker sandboxing is currently the strongest isolation solution. When running OpenClaw inside Docker:[4]

# Launch OpenClaw in Docker mode (recommended for production environments)
docker compose up -d

3. Six Major Threat Vectors

3.1 Prompt Injection

This is a universal risk for all LLM applications, but the harm is amplified in agent scenarios.[7] Attackers can embed malicious instructions in web content, emails, or documents, and when the agent reads these contents, it may be tricked into executing dangerous operations.

Mitigation: Stay vigilant about external content the agent reads. Avoid letting the agent automatically process content from untrusted sources.

3.2 Skill Supply Chain Attacks

OpenClaw Skills are plain-text skill.md files that anyone can write and publish. A malicious Skill can inject instructions into the system prompt, causing the agent to execute dangerous actions during seemingly normal operations.[1]

Mitigation: Only install Skills from the official ClawhHub; always read the full contents of skill.md before installation.

3.3 Gateway Exposure

Setting the Gateway to remote mode without configuring a Token or TLS is equivalent to opening a service on the public internet that can remotely execute arbitrary commands. CVE-2026-25253 exploited exactly this configuration flaw.[3]

Mitigation: Remote mode must be paired with Token + TLS; use firewalls to restrict the accessible IP range.

3.4 Credential Leakage

The agent may encounter API keys, database passwords, and other sensitive information during execution, leaving traces in conversation logs or system logs.

Mitigation: Use environment variables instead of files to store secrets; regularly clean conversation history and logs.

3.5 Model Overstepping

The LLM may, due to training data biases or instruction misinterpretation, perform actions beyond your intent — for example, you ask it to "clean up temp files" and it deletes important configuration files.

Mitigation: Test each new task type in a sandbox environment before production use.

3.6 Data Exfiltration

The agent sends your file contents to LLM providers for inference processing. Sensitive business data, source code, or personal information may leave your control as a result.[2]

Mitigation: Use local models (such as Ollama) for sensitive data; confirm your LLM provider's data processing policies.

4. Security Deployment Checklist

The following is our recommended security deployment steps, listed by priority:

PriorityMeasureImplementation
RequiredSet Gateway Tokenopenclaw doctor --generate-gateway-token
RequiredKeep Local mode (unless remote access is truly needed)openclaw config set gateway.mode local
RequiredUpdate to the latest versionnpm update -g openclaw
Strongly recommendedUse Docker sandboxDeploy with Docker Compose
Strongly recommendedAudit all installed SkillsReview ~/.openclaw/skills/*/skill.md
RecommendedAdd TLS for Remote modeNginx reverse proxy + Let's Encrypt
RecommendedRestrict channel access to owner_onlyopenclaw config set channels.*.dmPolicy owner_only
AdvancedNetwork segmentationDeploy OpenClaw in a dedicated VLAN

5. Special Considerations for Enterprise Environments

If your organization is evaluating OpenClaw for enterprise deployment, the following factors should be included in the security review:[6]

  1. Data classification: Clearly define what data classification levels the agent can access. Confidential data should not exist in a file system reachable by the agent
  2. Compliance requirements: Confirm whether the LLM provider's data processing complies with your industry regulations (GDPR, HIPAA, etc.)
  3. Audit logging: Enable complete operation logging to ensure every agent action is traceable
  4. Access control: Use Group Policy and Allowlist to limit which employees can use the agent
  5. Incident response: Develop a contingency plan for abnormal agent behavior, including emergency stop procedures

Conclusion

OpenClaw is a powerful tool, but power also means risk.[8] Proper security configuration does not limit its capabilities — it ensures those capabilities remain under your control.

The minimum security measures — Gateway Token, Local mode, latest version — take only a few minutes to complete. For more advanced security architecture, we recommend consulting the Docker and reverse proxy sections in our Gateway Deployment Guide, or contacting our consulting team for a security assessment.