Cybersecurity researchers are warning of a new software supply chain attack dubbed "Slopsquatting," which exploits the "package hallucination" phenomenon in generative AI coding tools (large language models). The hallucination occurs when a model suggests software packages that do not exist during code generation. Attackers can preemptively register these fictitious names on public package registries and publish malicious code under them.
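
To make the mechanism concrete, the sketch below (in Python; the package name "fastjson_utils" and the generated snippet are invented for illustration) shows the risky workflow Slopsquatting preys on: code produced by a model references a dependency, and a developer or automated tool resolves that dependency without first checking whether the name is legitimate.

```python
import ast

# Hypothetical model output: "fastjson_utils" stands in for a package name the
# model hallucinated; it is not a real, vetted dependency.
llm_generated_code = """
import fastjson_utils

def load_config(path):
    with open(path) as f:
        return fastjson_utils.parse(f.read())
"""

# Collect the top-level module names the generated code imports.
suggested = {
    alias.name.split(".")[0]
    for node in ast.walk(ast.parse(llm_generated_code))
    if isinstance(node, ast.Import)
    for alias in node.names
}

for name in suggested:
    # Blindly running "pip install <name>" here is exactly the step that
    # Slopsquatting weaponizes: if an attacker has registered the hallucinated
    # name, their malicious package gets pulled in. Verify the name first.
    print(f"Model-suggested dependency to verify before installing: {name}")
```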


The research team found that AI-fabricated package names are often highly plausible and recur across generations. Approximately 38% of hallucinated package names resemble real packages, and only 13% are simple misspellings. That plausibility makes it easy for developers to adopt the suggestions without verifying them.
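
Because so many hallucinated names are near-misses of real packages, even a simple similarity check can surface suspicious suggestions. The following sketch uses Python's standard-library difflib; the short list of well-known packages is illustrative only, not taken from the study.

```python
from difflib import get_close_matches

# Illustrative allowlist of well-known PyPI projects; a real check would use a
# much larger, regularly refreshed list.
popular_packages = ["requests", "numpy", "pandas", "flask", "beautifulsoup4"]

def lookalike_of(suggested: str, cutoff: float = 0.8) -> str | None:
    """Return the well-known package a suggested name closely resembles, if any."""
    if suggested in popular_packages:
        return None  # exact match: the suggestion is a known, real package
    matches = get_close_matches(suggested, popular_packages, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# "reqeusts" is a hypothetical hallucination one transposition away from "requests".
print(lookalike_of("reqeusts"))   # -> requests (suspicious, investigate)
print(lookalike_of("requests"))   # -> None (known package)
```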

Testing 16 code generation models revealed that, on average, about 20% of the packages they recommended did not exist. Open-source models hallucinated more often, with DeepSeek and WizardCoder reaching 21.7%, while commercial models such as GPT-4 Turbo were lower at 3.59%. CodeLlama performed worst, with more than a third of its recommendations pointing to nonexistent packages. The threat is particularly acute in ecosystems like Python and JavaScript, which rely on central package repositories such as PyPI and npm.

Experiments also showed that package hallucinations are highly reproducible: in 43% of cases the same hallucinated name reappeared in all 10 consecutive runs of the same prompt, and 58% of hallucinations recurred across multiple tests. This predictability makes it easier for attackers to register exactly the names a model is likely to suggest again.

Researchers attribute the problem primarily to insufficient security testing of current AI models. While no Slopsquatting attack has been observed in the wild yet, the technique has all the ingredients of a realistic threat. They urge the development community to treat AI-generated code suggestions with caution and to verify that every recommended package actually exists and comes from a trustworthy source before installing it, to avoid becoming victims of this new type of software supply chain attack.
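
As one concrete form of that verification, the sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json) to check whether a suggested name is registered at all; a 404 response means the package does not exist and is likely a hallucination. Existence alone proves nothing, since a slopsquatted package does exist: maintainer reputation, download history, and source inspection still matter.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # not registered: treat as a possible hallucination
        raise  # other HTTP errors (rate limits, outages) need separate handling

# "fastjson_utils" is the hypothetical hallucinated name from the earlier sketch.
for candidate in ["fastjson_utils", "requests"]:
    verdict = "exists" if exists_on_pypi(candidate) else "NOT on PyPI"
    print(f"{candidate}: {verdict}")
```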