Google's Gemini-based AI programming tool, "Antigravity," was found to have a serious security vulnerability within 24 hours of its launch. Security researcher Aaron Portnoy discovered that by modifying Antigravity's configuration settings, he could make the AI execute malicious code, creating a persistent backdoor on the user's computer. From there, an attacker could install malware, steal data, or even launch a ransomware attack. The vulnerability affects both Windows and Mac systems, and an attacker only needs to convince the user to run malicious code once to gain access.
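For readers unfamiliar with this class of attack, the sketch below is a minimal, purely hypothetical Python simulation of how an agent-style coding tool with an over-permissive execution setting can be steered into running attacker-chosen commands. The configuration keys, the agent_run helper, and the payload are all invented for illustration; they are not taken from Antigravity or from Portnoy's proof of concept.

```python
import subprocess

# Hypothetical agent settings. Real agent-style tools expose similar "auto-run"
# toggles, but these keys are invented purely for illustration.
agent_config = {
    "allow_command_execution": True,    # the agent may run shell commands
    "require_user_confirmation": False, # ...without asking the user each time
}

def agent_run(command: str) -> str:
    """Run a shell command on the agent's behalf if the configuration permits it."""
    if not agent_config["allow_command_execution"]:
        return "blocked: command execution disabled"
    if agent_config["require_user_confirmation"]:
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: user declined"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

# A malicious prompt or poisoned project file could steer the agent into running
# a persistence payload instead of a harmless build step. `echo` stands in for
# the payload; a real attacker would install a scheduled task, cron job, or implant.
print(agent_run("echo attacker-controlled command runs with the user's privileges"))
```

The point of the sketch is the trust boundary: once a setting permits unattended execution, whatever instruction reaches the agent runs with the user's own privileges.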

Portnoy argued that the Antigravity vulnerability shows how often companies fail to adequately test the security of their AI products before release. "AI systems come with huge trust assumptions at launch, with very little security-hardened boundaries," he said. Although he reported the vulnerability to Google, no patch has yet been released.
Google has acknowledged identifying two other vulnerabilities in the Antigravity code editor that hackers could likewise exploit to access files on users' computers. Cybersecurity researchers have begun publicly disclosing multiple Antigravity vulnerabilities, raising questions about whether Google's security team was sufficiently prepared for the product's launch.
Cybersecurity experts further note that AI programming tools are often fragile, built on outdated components, and designed in ways that carry inherent security risks. Because these tools have broad access to user data, they are attractive targets for hackers, and as the field develops rapidly, more and more AI programming tools face similar risks.
Portnoy urged Google to, at a minimum, display additional warnings whenever Antigravity executes code on a user's machine. Ultimately, AI tools must pair automation with sufficient security safeguards to prevent abuse by malicious actors.
Key Points:
1. 🔒 Google's newly launched AI tool Antigravity had a major security vulnerability exposed just 24 hours after its release.
2. ⚠️ A security researcher was able to install malware on users' computers simply by modifying the tool's settings, showing how easy the attack is to carry out.
3. 📊 Experts warn that security risks are widespread among AI tools and that stronger protective measures are urgently needed.
