On April 14, Liblib AI issued an official statement in response to recent media coverage of safety issues in AI-generated content (AIGC), saying it takes the matter extremely seriously and reporting on the progress of an internal special inspection and rectification effort. The statement acknowledged that under certain complex prompt combinations and edge-case scenarios where prohibited expressions were circumvented, the platform's generated output had at times fallen short of compliance standards.
Liblib AI says it has now completed technical fixes for the issues involved and fully blocked all known risk paths. To prevent such problems from recurring, the platform has also launched a comprehensive upgrade of its review mechanism, using strengthened penetration testing to identify and handle illegal content more efficiently. In parallel, the company has started an internal accountability review, aiming to shore up its content security system at the source, across both management and review processes.
In the statement, Liblib AI stressed that content security is the non-negotiable baseline for the platform's development. Going forward, it will hold itself to higher standards in promoting a healthy ecosystem and invites the public to help supervise via its reporting email (support@liblib.ai). The incident shows that AI platforms must keep iterating to cope with sophisticated attempts to generate prohibited content, and it signals that China's AIGC industry is entering a stricter phase of compliance and self-regulation.