According to a recent survey, 62% of cybersecurity officers stated that their employees were attacked using artificial intelligence (AI) in the past year. These attacks include prompt injection attacks and forged audio or video content. The survey found that deepfake audio phone attacks are the most common method, with 44% of companies reporting at least one such incident, and 6% of these incidents causing business disruptions, financial losses, or intellectual property losses.

Cybersecurity, Privacy

Image source note: The image is AI-generated, and the image licensing service is Midjourney

In cases where audio screening services are used, the percentage of losses drops to 2%. Video deepfake cases are slightly less common, with 36% of companies experiencing them, but 5% of these incidents also caused serious problems.

Chester Wisniewski, Global Chief Information Security Officer at security company Sophos, pointed out that deepfake audio technology has become very mature and low-cost. He mentioned, "Audio forgery can now be generated in real-time. If it's the voice of your spouse, they may notice, but if it's a colleague you only occasionally communicate with, it can be forged in real-time almost without any obstacles, which poses a big challenge."

He believes that the reported numbers for audio deepfakes may be underestimated, but the proportion of video fakes is higher than expected. He noted that real-time forging of videos of specific individuals is very expensive, potentially costing millions of dollars. However, Sophos has already seen some scammers briefly displaying deepfake videos of a CEO or CFO in WhatsApp calls, then claiming there was a network issue, deleting the video, and switching to text communication for social engineering attacks.

Additionally, cases of using regular video deepfakes to hide someone's identity are more common. For example, North Korea uses AI deepfake technology to provide services to Western companies, earning millions of dollars, and this technology is quite deceptive even for professionals.

Furthermore, another rising AI-generated attack is prompt injection attacks, where attackers embed malicious instructions into the content processed by AI systems, tricking them into leaking sensitive information or misusing connected tools. This could lead to code execution, especially when integrated. According to a Gartner survey, 32% of respondents said their applications had been subjected to prompt injection attacks.

Key points:

📞 62% of companies have experienced AI attacks, with 44% reporting deepfake audio phone incidents.

💻 Real-time audio forgery technology is mature, posing new challenges for employee security.

🔍 Prompt injection attacks are frequent, affecting 32% of enterprise applications.