Anthropic is making significant changes to its user data processing practices. The company announced that all Claude users must decide by September 28th whether their conversations can be used for training AI models. Previously, Anthropic had promised not to use consumer chat data for model training and to automatically delete user prompts and conversation outputs after 30 days.

Overview of the Policy Change

This new policy applies to all Claude Free, Pro, and Max users, including those using Claude Code, meaning that users' conversations and coding sessions may now be used to train Anthropic's AI systems. Additionally, the data retention period for users who do not opt out will be extended to five years. Notably, Claude Gov, Claude for Work, Claude for Education, and enterprise customers using the API are not affected by this policy, similar to how OpenAI shields its enterprise clients.


Why is Anthropic Changing Its Policy?

In its blog post, Anthropic stated that this move will "help us improve model safety, making our system for detecting harmful content more accurate," and help future Claude models "enhance skills such as coding, analysis, and reasoning." In short, the company frames the change as a way to deliver better models.

However, deeper reasons may lie in an urgent need for data. Like all large language model companies, Anthropic needs large volumes of high-quality conversation data to train its AI models and stay competitive with rivals like OpenAI and Google. Access to millions of Claude user interactions can provide Anthropic with valuable real-world training data.

Shifts in Industry Data Policies and User Confusion

Anthropic's policy changes reflect a broader shift in industry data policies. Under increasing scrutiny over data retention, AI companies are re-evaluating their privacy agreements. For example, OpenAI is currently contesting a court order, stemming from lawsuits by publishers including The New York Times, that requires it to retain all consumer ChatGPT conversations indefinitely. OpenAI's Chief Operating Officer, Brad Lightcap, called this an "unnecessary requirement" and argued that it conflicts with the company's privacy promises to users.

These changing usage policies have caused considerable confusion among users, many of whom were unaware that the guidelines they had already agreed to were quietly changing. Notably, new users can choose their preference during sign-up, but existing users are shown a large "Consumer Terms and Policy Update" window with a prominent "Accept" button, below which the toggle for training permissions appears in smaller type and defaults to "On." This design pattern, as noted by The Verge, may lead users to quickly click "Accept" without noticing the data-sharing consent option.

Privacy experts have long warned that the complexity of AI makes obtaining effective user consent extremely difficult. The Federal Trade Commission (FTC) has warned that if AI companies "secretly change their service terms or privacy policies, or hide disclosures in hyperlinks, legal jargon, or details," they could face enforcement actions. However, whether the FTC is still monitoring these practices remains an open question.