2025 China Internet Conference On-Site Report: At the highly anticipated 2025 China Internet Conference, when asked about potential risks in the practical application of large models, Zhou Hongyi, founder of 360 Group thoroughly analyzed the new challenges that cybersecurity faces in the era of artificial intelligence. He pointed out that compared to traditional IT system vulnerabilities and data privacy leaks, the three major security risks brought by large models should be given high attention.
Core Risk One: "Hallucinations" and Nonsense from Large Models
Zhou Hongyi emphasized that the inherent "hallucination" issue of large models is one of its biggest risks. He explained that when a large model encounters something it does not understand, it will fabricate information seriously. Although this can be laughed off in entertainment scenarios, when large models and their derived intelligent agents begin to deeply enter key areas such as industrial production, manufacturing, and government offices, such "errors" could lead to serious consequences. He specifically pointed out that once an intelligent agent gains the ability to manipulate various tools, the harm and impact of its wrong judgments will be doubled.
Core Risk Two: Large Models Lower the Bar for Attacks, Everyone Can Perform "Injection Attacks"
Zhou Hongyi's second risk is that large models greatly lower the barrier for network attacks. He pointed out that large models allow non-programming professionals to write programs through simple natural language interactions, which also means the barrier to attacking large models is reduced. Through carefully constructed instructions, attackers can induce large models to leak confidential company files, a phenomenon known as "injection attacks." Zhou Hongyi vividly stated that in the future, even a front-line employee without programming knowledge might launch an attack against the company's large model and intelligent agents due to dissatisfaction.
Core Risk Three: Intelligent Upgrade of National-Level Advanced Threat Attacks
From a broader future perspective, Zhou Hongyi pointed out that large models will make nation-state advanced threat attacks more common and complex. In the past, the number of hackers targeting our country was relatively small, but now, hackers are trying to embed their abilities and experience into large models, turning themselves into "hacker intelligent agents." With sufficient computing power support, one hacker can simultaneously control dozens or even hundreds of intelligent agents, completely overturning the traditional network defense situation, transforming cybersecurity from "man vs man" confrontation to "man vs algorithm, man vs machine, man vs computing power," because robot digital hackers only need computing power and don't need rest.
360's Response Strategy: "Counter Algorithms with Algorithms"
Facing these severe challenges, Zhou Hongyi stated that 360 has already begun to take two key measures to respond:
First, 360 is actively building intelligent agent security experts. These experts can help enterprises achieve real-time detection and defense when facing attacks, thus truly achieving "counter algorithms with algorithms."
Second, regarding the security and vulnerability of large models, 360 has developed a specialized "Large Model Guardian."