The academic community was recently shaken by a painful "digital tragedy." Professor Marcel Bucher of the University of Cologne discovered that two years of research material he had stored in ChatGPT, including project applications, paper revisions, lecture notes, and exam materials, had suddenly vanished.
The incident began with a simple settings change. In a Nature column, Professor Bucher recounted that he was curious whether the model's functions would still work if he turned off OpenAI's "Data Consent" option. After he switched it off, however, all of his conversation records disappeared immediately, with no warning. Because the platform offered no "undo" or "restore" button, he was left with nothing but a blank page.
The incident sparked heated debate on social media. Many commenters were baffled that the professor had gone two years without making local backups, and some argued that such heavy reliance on AI tools undermines academic rigor. Professor Bucher responded that he knew the AI could produce factual errors, but he had always trusted the stability and continuity of the workspace itself and had used it as a daily research assistant.
Responding to the incident, OpenAI told Nature that the claim of "no warning" was incorrect: a confirmation prompt appears before users permanently delete conversations, and once conversations are deleted, they cannot be recovered. The company also reminded professional users to keep personal backups of important work and not to treat cloud-based AI platforms as their sole repository.
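The backup advice above is easy to automate. As an illustrative sketch only (the file and directory names here are hypothetical and not part of any OpenAI tool), a small script can copy a manually exported conversation archive into timestamped local snapshots, so no single deletion can wipe out years of work:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_export(export_file: str, backup_dir: str) -> Path:
    """Copy an exported data file (e.g. a conversations JSON you
    downloaded yourself) into a timestamped snapshot directory,
    returning the path of the new copy."""
    src = Path(export_file)
    if not src.exists():
        raise FileNotFoundError(f"nothing to back up: {src}")
    # One subdirectory per snapshot, named by date and time.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest_dir = Path(backup_dir) / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 also preserves file metadata
    return dest
```

Run on a schedule (cron, Task Scheduler), a routine like this keeps independent local copies; the cloud workspace then becomes a convenience, not the only repository.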