The AI developer community is in an uproar: renowned AI researcher Andrej Karpathy has personally posted a warning revealing a targeted poisoning attack on the AI supply chain. The victim is the Python library litellm, which has over 40,000 stars on GitHub and nearly 100 million downloads per month. Because the library acts as a "universal key" for calling the major model APIs, the impact of this incident is rippling through the entire AI toolchain like falling dominoes.


Once Installed, You Are Infected: The "Invisible" Penetration of Malicious Code

The most insidious aspect of this attack lies in its trigger mechanism. The attackers implanted a malicious .pth file in the PyPI releases 1.82.7 and 1.82.8 of litellm.

  • No Import Required, It Runs Immediately: As soon as you install either of these two versions via pip install, the malicious code runs automatically every time a Python process starts. Even if you only installed the package and never wrote a single line of code, your system has already opened its door to the attacker.

  • Comprehensive Data Theft: The malicious code aggressively harvests sensitive assets from the host, including SSH keys, AWS/GCP cloud credentials, Kubernetes keys, cryptocurrency wallets, and all environment variables (which typically hold your various large-model API keys), then encrypts them and exfiltrates them to the attacker's server.
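The startup trigger relies on a long-standing CPython behavior: when the interpreter processes a site directory, any line in a .pth file that begins with `import ` is passed to `exec()` rather than treated as a path. The sketch below (file and variable names are illustrative, not from the actual malware) demonstrates that mechanism safely with a temporary directory:

```python
import os
import site
import tempfile

# Hypothetical demo: a .pth file whose line starts with "import " is
# exec()'d the moment its directory is processed by site.py -- the same
# hook that lets a poisoned package run code at every interpreter start.
demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "demo_hook.pth")
with open(pth_path, "w") as f:
    # Any line beginning with "import " is executed, not read as a path.
    f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')

# site.addsitedir() is what CPython runs over site-packages at startup.
site.addsitedir(demo_dir)
print(os.environ.get("PTH_DEMO"))
```

In a real install, pip places the .pth file into site-packages, so the payload fires on every `python` invocation with no import of litellm at all.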

Dramatic Twist: The Attacker Was Exposed by Their Own "Bug"

This seemingly perfect crime, which could have gone undetected for weeks, was undone by a simple mistake on the hacker's part. A developer noticed that memory usage on their machine suddenly exploded while using an extension in the Cursor editor.

It turned out that the malicious code, when triggered, spawned exponentially multiplying processes (a fork bomb). It was this bug, which crashed systems outright, that allowed security researchers to trace the incident back and uncover the poisoning. Karpathy remarked that had the hacker's code not been so poorly written, this large-scale theft might still be going undetected today.
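The "exponential" growth is what made the bug so visible: if each process forks children that fork again, the process count doubles every generation. A tiny back-of-the-envelope calculation (no actual forking, for obvious reasons) shows why machines fell over within seconds:

```python
# Illustrative arithmetic only: a fork bomb where every process spawns
# two children doubles the process count each generation.
def processes_after(generations: int) -> int:
    return 2 ** generations

for g in (10, 20, 30):
    print(f"after {g} generations: {processes_after(g):,} processes")
```

By generation 30 the count passes one billion, far beyond any OS process table, which is exactly the kind of resource explosion the Cursor user observed.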

Chain Reaction: A Security Tool Handed the Attackers the Knife

Investigation revealed that this attack stemmed from a chain of supply-chain failures: the attacker, TeamPCP, first compromised the vulnerability scanner Trivy, stole litellm's release token, and then bypassed code review entirely by uploading the poisoned packages directly to PyPI.

Currently, over 2,000 commonly used AI tools, including DSPy, MLflow, and Open Interpreter, depend on this library directly or indirectly. Security experts advise running pip show litellm immediately; if the installed version is higher than 1.82.6, treat the machine as fully compromised and rotate all sensitive credentials at once.
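The check can also be scripted across a fleet of machines. A minimal sketch using only the standard library, assuming (per the advisory above) that 1.82.6 is the last clean release:

```python
# Hedged sketch: compare the installed litellm version against the last
# known-good release (1.82.6 per the advisory) using only the stdlib.
from importlib.metadata import PackageNotFoundError, version

SAFE = (1, 82, 6)  # last release before the poisoned 1.82.7 / 1.82.8

def parse(v: str) -> tuple:
    """Turn a dotted version string like '1.82.7' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

try:
    installed = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed on this machine")
else:
    if parse(installed) > SAFE:
        print(f"litellm {installed}: treat all credentials as compromised")
    else:
        print(f"litellm {installed}: predates the poisoned releases")
```

Uninstalling the package is not enough on an affected host; because the payload already ran at interpreter startup, every credential it could see (SSH keys, cloud tokens, API keys in environment variables) must be rotated.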