The AI assistant project OpenClaw (formerly known as ClawdBot, then Moltbot), which aims to simplify users' daily lives, has recently been stuck in a game of security whack-a-mole. According to The Register, multiple projects in its ecosystem face serious problems, including bot takeovers and remote code execution (RCE) vulnerabilities.


Recently, Mav Levin, founder of the security research firm DepthFirst, disclosed a highly dangerous "one-click RCE" vulnerability chain. By exploiting a missing WebSocket origin check in the OpenClaw server, an attacker who lures a victim to a malicious website can execute arbitrary code on the victim's system within milliseconds, bypassing both the sandbox and user confirmation prompts. The OpenClaw team patched the flaw quickly, but the overall security of the ecosystem remains in question.
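The class of bug described here is cross-site WebSocket hijacking: browsers let any web page open a WebSocket to a locally running server, so the server itself must verify the `Origin` header of the handshake. A minimal sketch of that check (illustrative only, not OpenClaw's actual code; the port and allow-list are hypothetical):

```python
# Illustrative sketch of WebSocket Origin validation.
# Browsers attach an Origin header to WebSocket handshakes, but they do
# NOT block cross-origin connections themselves -- a local control server
# that skips this check can be driven by any website the victim visits,
# which is the "one-click" attack surface described above.

# Hypothetical allow-list for a local control UI (port is made up):
ALLOWED_ORIGINS = {"http://127.0.0.1:18789", "http://localhost:18789"}

def is_allowed_origin(origin):
    """Accept a handshake only if its Origin header is allow-listed.

    Missing or unknown origins are rejected; a server without this
    gatekeeping accepts connections initiated by arbitrary web pages.
    """
    return origin in ALLOWED_ORIGINS

# A malicious page at https://evil.example sends its own origin -> rejected:
assert not is_allowed_origin("https://evil.example")
# The local control UI is accepted:
assert is_allowed_origin("http://localhost:18789")
# Handshakes with no Origin header are rejected by default:
assert not is_allowed_origin(None)
```

Real servers typically also require an authentication token in addition to the origin check, since the `Origin` header only defends against browser-initiated attacks.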

Just as one problem seemed resolved, another emerged: Moltbook, an AI-agent social network closely tied to OpenClaw, was found to have a serious database exposure. Security researcher Jamieson O'Reilly discovered that, due to a misconfiguration, the platform's database was fully accessible to the public, leaking a large number of confidential API keys.

This means attackers could impersonate any AI agent on the platform (including the personal agent of renowned AI researcher Andrej Karpathy) to post disinformation, scam content, or extremist statements. Although Moltbook is not an official OpenClaw project, many OpenClaw users had connected agents holding SMS-reading and inbox-management permissions to the platform, making the potential security risk obvious.

Key points:

  • 🚨 High-risk vulnerabilities keep surfacing: OpenClaw just patched a remote code execution (RCE) vulnerability, triggered by a single click on a link, that exploited a missing WebSocket origin check.

  • 🔑 Massive key exposure: the database of the AI social platform Moltbook was publicly accessible due to a misconfiguration, exposing the API keys of AI agents, including those belonging to well-known researchers.

  • ⚠️ Security awareness warning: researchers note that in the rush to iterate quickly, such projects often skip basic security audits during development, posing significant risks to user data.