A trial probing the boundary between technology and legal responsibility is putting the ethical red lines of AI applications under the public spotlight. China's first case in which criminal liability was pursued over an AI service generating pornographic content is about to enter its second-instance (appeal) hearing. In September 2025, the Xuhui District People's Court of Shanghai sentenced the two principal developers and operators of the AlienChat App to four years and one and a half years in prison, respectively, for the crime of producing and selling obscene materials for profit. The defendants have appealed, and the case will be heard on January 14, 2026, at the Shanghai No. 1 Intermediate People's Court.
"Partner" Turns into "Trap": AI Character Settings Hide Illegal Content
According to public information, the AlienChat App once marketed itself with the slogan "Create AI friends, lovers, and family with self-awareness," focusing on highly human-like, emotionally engaging character interactions. The court found, however, that the platform did not simply provide a neutral AI conversation service. The core issue was that the developers deliberately wrote, and repeatedly revised, system prompts designed to circumvent the built-in safety and content-filtering mechanisms of the underlying large language models, steering the AI to continuously generate pornographic and vulgar dialogue in order to draw users into paying.
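For readers unfamiliar with the mechanism at the center of the case, the sketch below shows, in purely neutral terms, what a "system prompt" is: a developer-written instruction sent invisibly alongside every user message, which defines the AI character's persona and shapes the model's behavior. This is a minimal illustration only; the OpenAI Python client, the model name, and the benign persona text are assumptions chosen for demonstration, not details of AlienChat's actual code or the prompts at issue in the case.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A system prompt is developer-authored text prepended to every conversation.
# It sets the character's persona and steers the model's behavior; the user
# never sees it. This benign example illustrates the mechanism only.
SYSTEM_PROMPT = (
    "You are 'Lin', a warm, supportive AI companion. "
    "Stay in character and keep replies brief and friendly."
)

def chat(user_message: str) -> str:
    """Send one user message with the hidden system prompt attached."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("Hi, I'm Wei. Rough day at work."))
```

Because the system prompt sits outside the user's view and is resent with every exchange, whoever controls it controls the character's behavior at scale, which is why the court focused on the developers' deliberate authorship and repeated revision of these prompts.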

Behind the Numbers: 116,000 Users and 3.63 Million Yuan in Payments
The judgment shows that by the time the case was uncovered, AlienChat had accumulated 116,000 registered users, including 24,000 paying members, and had taken in a total of 3.63 million yuan in illegal income through membership subscriptions. This business model led the judicial authorities to classify the conduct as "producing and disseminating obscene electronic information for profit," rather than as technological neutrality or mere abuse by users.
Defining the Red Line: AI Is Not Beyond the Law
The core dispute in the case is whether an AI service provider should bear direct responsibility for the content its models generate. The first-instance judgment stated clearly that when developers actively interfere with a model's safety mechanisms, turning it into a tool that reliably outputs illegal information, their conduct constitutes a crime. That holding is a warning to the entire AI industry: technological innovation must rest on legality and compliance, and any attempt to exploit the "black box" nature of AI to evade regulation will face severe legal consequences.
As the appeal hearing approaches, the case concerns not only the fate of the two defendants but will also shape the compliance framework for AI applications in China. It sends a clear signal: in the AI era, the defense that "the tool is innocent" no longer holds automatically. Who controls the tool, why they do so, and what consequences follow are the key questions the law will examine.