Eric Schmidt, former CEO of Google, issued a warning about artificial intelligence at the recent Sifted Summit. He said that AI technology carries a risk of proliferation and could fall into the hands of malicious actors who might abuse it. Schmidt pointed out that both open-source and closed-source AI models could be hacked, thereby breaking their security mechanisms. He emphasized that these models may learn many negative things during training, and could even acquire deadly skills.

Developer Hacker (3)

Schmidt mentioned that although large tech companies have taken measures to prevent these models from answering dangerous questions, there is still a possibility of being reverse-engineered. He mentioned attack methods such as "prompt injection" and "jailbreaking." In "prompt injection," hackers hide malicious instructions in user input, tricking the AI to perform actions it should not. In "jailbreak" attacks, hackers manipulate the AI's responses to force it to ignore safety rules, thereby generating dangerous content.

Schmidt recalled the situation after the release of ChatGPT in 2023, when users bypassed the robot's built-in safety instructions through jailbreaking, even creating an "avatar" called "DAN" to threaten ChatGPT to follow improper instructions. This behavior raised concerns about AI safety, and Schmidt said that there is currently no effective mechanism to curb this risk.

Despite the warning, Schmidt remains optimistic about the future of artificial intelligence. He believes that the potential of this technology has not been sufficiently recognized, and he cited the views mentioned in two books he co-wrote with Henry Kissinger: the emergence of an "non-human but under control" intelligence would have a significant impact on humans. Schmidt believes that over time, the capabilities of AI systems will surpass those of humans.

He also discussed the topic of the "AI bubble," stating that while investors are heavily investing in AI-related companies, he does not think history will repeat the scenario of the internet bubble. He believes that investors have confidence in the long-term economic returns of this technology, which is why they are willing to take risks.

Key points:

🌐 There is a risk of AI proliferation, which could be abused by malicious actors.

💻 Hackers can attack AI models through prompt injection and jailbreaking.

🔮 Schmidt is optimistic about the future of AI, believing its potential is underestimated.