New Vulnerability Disclosure: Font Poisoning Makes AI Blind - Only Microsoft Copilot Actively Fixes
Security vendor LayerX discovered a new font rendering "attack", where hackers use custom fonts and CSS styles to disguise malicious instructions as garbage characters. By exploiting the discrepancy between the text AI reads at a lower level and the visual content rendered on the screen, they successfully deceived mainstream AI tools like ChatGPT, making them provide incorrect security advice.