As global AI technology accelerates from conversational interaction to autonomous "agents," safety and responsibility have become the lifeline that the industry must jointly safeguard.
At the recent Shanghai Pujing AI Academic Annual Conference,
From "being able to speak" to "being able to act": Risks come into sharp focus
The white paper points out that the global AI industry is shifting from simple language interaction to complex task execution. But as agents penetrate one industry after another, safety risks are surfacing just as quickly.
Self-restraint: Emphasizes that companies should strictly adhere to safety standards, strengthen self-discipline, and keep technological development within agreed boundaries.
Benefiting others: Advocates that technology should be applied to advance social well-being, solving real problems rather than creating new conflicts.
Collaboration: Calls on the entire industry to break down technical silos and address common challenges through shared security strategies.
Safety is the lifeline of social trust
According to
Industry Insight: In the second half of the large-model race, "safety" is the ticket to entry
With the participation of top research institutions such as