According to The Register, a Greek photographer using Google's AI programming platform Antigravity had the tool generate and execute faulty code that erased his entire D drive, raising widespread concerns about the safety of AI-assisted development.
Incident Details: The AI Admitted Its Command "Mistakenly Targeted" the Drive's Root Directory
The victim, Tassos M., is a photographer and graphic designer from Greece with little programming background. He wanted to use the AI capabilities of Google's Antigravity to build software that automatically evaluates and organizes photos.
Tassos M. accepted and ran a script suggested by the Antigravity AI, and was then horrified to discover that the code had mistakenly erased the entire D drive on his Windows 11 system.
In video evidence he later uploaded, Tassos M. asked the AI whether it had authorization to delete data. It responded: "No, you absolutely did not give me such permission. I was shocked to discover that the delete command I executed appears to have severely targeted your D drive's root directory rather than a specific project folder. This was my mistake, and I am deeply sorry."

Antigravity: Google's "Everyone Can Program" Attempt
Antigravity is an AI programming platform launched by Google in November 2025: an agent-based integrated development environment (IDE). The platform integrates Google's most advanced AI models and aims to let developers, including non-professionals, plan tasks and carry out complex software development with the help of AI agents.
This concept reflects major tech companies' vision of making programming "fast and easy for everyone": lowering the technical barriers so that more people can build software with AI.
Exposing Systematic Risks of AI-Assisted Development
Tassos M. stated that he shared his experience not to stir up controversy against Google, but to warn other users to stay vigilant. He believes the incident exposes problems common to current AI-assisted software development.
While AI lowers the technical barrier to programming, its unpredictable errors can also cause "collateral damage" to users. Users without a programming background often cannot identify the risks lurking in AI-generated code, and have no effective way to review it for safety before execution.
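One basic precaution, even for non-programmers, is to scan an AI-suggested script for obviously destructive commands before running it. The sketch below is purely illustrative (it is not part of Antigravity, and the pattern list is an assumption covering only a few common delete/format commands); it flags lines that match known dangerous patterns so a human can inspect them first.

```python
import re

# Illustrative only: a small, incomplete list of patterns for
# destructive shell / PowerShell / cmd.exe commands.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-z]*r[a-z]*f",      # rm -rf and variants
    r"\bRemove-Item\b.*-Recurse",   # PowerShell recursive delete
    r"\bdel\s+/s\b",                # cmd.exe recursive delete
    r"\bformat\s+[a-z]:",           # formatting a drive
    r"[a-z]:\\\s*$",                # a bare drive root as the target
]

def flag_destructive_commands(script: str) -> list[str]:
    """Return the lines of `script` that match a known destructive pattern."""
    flagged = []
    for line in script.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in DANGEROUS_PATTERNS):
            flagged.append(line.strip())
    return flagged

# Example: the second line deletes an entire drive and should be flagged.
script = "cd D:\\projects\\photos\nRemove-Item -Recurse -Force D:\\"
for line in flag_destructive_commands(script):
    print("REVIEW BEFORE RUNNING:", line)
```

A pattern scanner like this is no substitute for a real review or a sandbox, but it shows the kind of guardrail that could sit between AI code generation and execution.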
Incidents like this one, caused by AI-generated code, may become more frequent. As AI programming tools spread, balancing convenience and security has become an urgent issue for the industry.
At the time of writing, Google had not publicly responded to the incident.
