Microsoft security researchers have recently issued a warning that a new type of attack called "AI Recommendation Poisoning" is spreading rapidly. Attackers embed hidden instructions in the "AI summary" buttons or links on web pages, prompting AI to generate content with bias or misinformation.

The core of this attack lies in exploiting AI's "memory" mechanism. When users click these seemingly ordinary links, malicious prompt words encoded in the URL are secretly fed into the AI. Once the AI executes these instructions, they not only appear in the current output but may also persist in the AI's storage as "historical context," affecting all subsequent recommendations.

Findings from the Microsoft Defender Security Team:

  • Broad Impact: Researchers have identified more than 50 unique malicious prompts from 31 companies across 14 industries.

  • Highly Concealed: Compromised AI assistants may provide subtle but biased advice in critical areas such as healthcare, finance, and security, which users are completely unaware of.

  • Low Barrier to Entry: Due to the existence of various code libraries and tools, integrating such recommendations into scripts on web pages has become extremely simple.

Microsoft reminds users to remain vigilant when clicking any AI-related share or summary links and suggests regularly clearing the AI assistant's stored memory to prevent this "invisible manipulation."

Key Points:

  • ⚠️ Covert Manipulation: Attackers modify URL parameters to make the AI summarize according to specific intentions (such as favoring one side) rather than based on facts.

  • 🧠 Persistent Poisoning: Malicious instructions are treated by the AI as the user's true preferences and stored in the "memory," thereby polluting long-term interaction results.

  • 🛡️ Security Recommendations: Users should review the AI's stored memory entries, delete unfamiliar ones, and regularly clear the conversation context.