On April 7, 360 announced that its Vulnerability Mining Intelligent Agent had discovered and reported three high-value security vulnerabilities in the AI agent OpenClaw: one high-severity vulnerability and two medium-severity vulnerabilities. All of the vulnerabilities have been officially fixed and publicly disclosed. The achievement marks a breakthrough for AI agents in automated security auditing, shifting from traditional rule-driven scanning to reasoning-driven analysis, and provides key technical support for the security governance of AI-native applications.

The high-severity vulnerability concerns the approval and execution mechanism for local scripts: an attacker can tamper with a script's content after it has passed approval, achieving unauthorized code execution and gaining control of the user's device. The two medium-severity vulnerabilities involve reuse of security verification parameters in the OAuth manual authorization flow, and resource management defects in the handling of WebSocket data during voice calls. The former could compromise a user's Google service account permissions; the latter could exhaust system resources and crash the device. These vulnerabilities strike at the core operating mechanisms of AI agents, exposing deep-seated risks in current agents' permission isolation and protocol implementations.
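The high-severity flaw described above is an instance of the classic time-of-check/time-of-use (TOCTOU) pattern: approval is granted against one version of a script, but a different version runs. The following is a minimal sketch of that flaw class and a hash-pinning mitigation; all names (`ScriptApprover`, `may_execute`) are hypothetical and do not reflect OpenClaw's actual code.

```python
import hashlib

class ScriptApprover:
    """Illustrative approval gate that pins approved script content by hash.

    A vulnerable variant would record only the script's path, so an
    attacker could rewrite the file between approval and execution.
    Pinning a digest of the reviewed content closes that window.
    """

    def __init__(self):
        # path -> sha256 of the content the user actually reviewed
        self._approved = {}

    def approve(self, path: str, content: bytes) -> None:
        # Pin the exact bytes that were approved, not just the path.
        self._approved[path] = hashlib.sha256(content).hexdigest()

    def may_execute(self, path: str, content: bytes) -> bool:
        # Re-hash at execution time; any tampering changes the digest.
        digest = hashlib.sha256(content).hexdigest()
        return self._approved.get(path) == digest


approver = ScriptApprover()
approver.approve("task.sh", b"echo hello")
print(approver.may_execute("task.sh", b"echo hello"))  # unchanged content
print(approver.may_execute("task.sh", b"rm -rf ~"))    # tampered content
```

The same principle applies beyond scripts: any approval decision should bind to the content being approved, not to a mutable reference such as a file path.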

According to 360, the vulnerability mining agent system has cumulatively identified high-value vulnerabilities across multiple mainstream AI agents. Unlike traditional scanning tools, the system simulates the attack-and-defense intuition of human security experts, automating vulnerability discovery, verification, and reproduction and freeing analysts to focus on more creative risk assessment. As AI agents become integrated into user workflows, AI-driven automated vulnerability discovery will become critical infrastructure for securing the foundations of the AI industry chain, promoting a more resilient security defense system across the industry.