Meta recently announced that it has fixed a security vulnerability in its AI chatbot that allowed users to access other users' private prompts and AI-generated content. The bug's discoverer, Sandeep Hodkasia, founder of the security testing firm AppSecure, received a $10,000 bounty from Meta for privately disclosing the vulnerability on December 26, 2024.
Hodkasia told TechCrunch that he discovered the vulnerability while studying how Meta AI lets users edit their prompts to regenerate text and images. He noticed that, in the process, Meta's backend assigns a unique number to each prompt and its generated response. By inspecting his browser's network traffic while editing a prompt, he found that he could simply change that number, and Meta's servers would return another user's prompt and generated content.
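To make the request-tampering concrete, here is a minimal sketch of the pattern Hodkasia describes. The endpoint URL, field names, and ID values are hypothetical placeholders; the report does not disclose Meta's actual API paths or parameters.

```python
import requests

# Hypothetical endpoint and session token, for illustration only;
# the real Meta AI API paths and parameters were not disclosed.
BASE_URL = "https://example.com/api/prompts"
SESSION_COOKIE = {"session": "attackers-own-valid-session"}

def fetch_prompt(prompt_id: int) -> dict:
    """Replay the prompt-fetch request with an arbitrary ID.

    On a vulnerable backend, the server returns whatever prompt
    matches the ID, regardless of which user's session asks for it.
    """
    resp = requests.get(f"{BASE_URL}/{prompt_id}",
                        cookies=SESSION_COOKIE, timeout=10)
    resp.raise_for_status()
    return resp.json()

# If the researcher's own prompt were, say, ID 1000001, simply
# decrementing the number would fetch someone else's conversation.
other_users_prompt = fetch_prompt(1000000)
```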
In other words, Meta's servers were failing to check that the user requesting a prompt and its response was actually authorized to see it. Hodkasia noted that the prompt numbers were "easy to guess," so a malicious attacker could have used automated tools to cycle through them and scrape other users' original prompts at scale.
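This is the classic insecure direct object reference (IDOR) pattern. Below is a minimal, self-contained sketch of the two server-side defenses it calls for: an ownership check on every fetch, and random rather than sequential identifiers. The in-memory store and function names are hypothetical, not Meta's actual implementation.

```python
import secrets

# Hypothetical in-memory store: prompt_id -> record (illustration only).
PROMPTS = {}

def create_prompt(user_id: str, text: str) -> str:
    # A random, unguessable identifier instead of a sequential number,
    # so IDs cannot be enumerated by simply incrementing them.
    prompt_id = secrets.token_urlsafe(16)
    PROMPTS[prompt_id] = {"owner": user_id, "text": text}
    return prompt_id

def get_prompt(requesting_user: str, prompt_id: str) -> dict:
    record = PROMPTS.get(prompt_id)
    if record is None:
        raise KeyError("prompt not found")
    # The authorization check the vulnerable servers were skipping:
    # confirm the prompt actually belongs to the requesting user.
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record

alice_id = create_prompt("alice", "draft a birthday message")
print(get_prompt("alice", alice_id)["text"])  # succeeds: alice owns it
get_prompt("mallory", alice_id)               # raises PermissionError
```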
Meta confirmed that the vulnerability was fixed on January 24, 2025, and said the company "found no evidence of abuse and rewarded the researcher." Meta spokesperson Ryan Daniels noted in an interview that as major technology companies race to launch and refine their own AI products, security and privacy risks have become increasingly prominent.
The standalone Meta AI app launched earlier this year to compete with rivals such as ChatGPT. But the app got off to a rocky start, with some users inadvertently making what they believed were private conversations with the chatbot public.