The AI company Anthropic has made a significant change to the usage policy of its AI coding tool, Claude Code, introducing a weekly usage cap that has drawn widespread attention and discussion among users. The change mainly affects subscribers to Claude Code's Pro and Max plans, especially the higher-tier Max plans, and takes effect on August 28, 2025. The move is widely read as Anthropic's response to surging demand and strained compute resources, but the lack of transparency around it has frustrated users.

New Weekly Limits Introduced, Affecting a Small Minority of Users

According to Anthropic's official announcement, the new weekly usage caps primarily apply to users of the Pro plan ($20 per month) and the Max plans ($100 and $200 per month), with fewer than an estimated 5% of users expected to be affected. Specifically, Pro users get roughly 40 to 80 hours of Claude Sonnet 4 per week; Max ($100) users get roughly 140 to 280 hours of Sonnet 4 and 15 to 35 hours of Opus 4; and Max ($200) users get roughly 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4. Actual limits can vary with factors such as the size of the codebase.

In addition, Anthropic is giving Max plan users the option to purchase additional usage at standard API rates to cover high-intensity needs. Still, the abruptness of the rollout, and the fact that limits are denominated in hours rather than the more familiar tokens, caught many users off guard. Some noted on social media that at the top of the stated ranges there is only a 5-hour difference in Opus 4 usage between the $200 and $100 Max plans (40 versus 35 hours), raising questions about the cost-effectiveness of the higher tier.
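
To put the cost-effectiveness complaint in concrete terms, here is a minimal back-of-the-envelope comparison in Python. It uses only the weekly hour ranges quoted above; the plan names and figures mirror the announcement, and the script itself is purely illustrative.

```python
# Weekly hour ranges as quoted in Anthropic's announcement: (low, high).
plans = {
    "Max ($100/mo)": {"price": 100, "Sonnet 4": (140, 280), "Opus 4": (15, 35)},
    "Max ($200/mo)": {"price": 200, "Sonnet 4": (240, 480), "Opus 4": (24, 40)},
}

# What does the extra $100/month buy at the top of each stated range?
cheap, pricey = plans["Max ($100/mo)"], plans["Max ($200/mo)"]
for model in ("Sonnet 4", "Opus 4"):
    gain = pricey[model][1] - cheap[model][1]
    print(f"{model}: +{gain} hours/week at the upper bound for +$100/month")

# Prints:
# Sonnet 4: +200 hours/week at the upper bound for +$100/month
# Opus 4: +5 hours/week at the upper bound for +$100/month
```

As the arithmetic shows, doubling the subscription price roughly doubles the Sonnet 4 ceiling but adds only 5 Opus 4 hours at the upper bound, which is exactly the gap users are questioning.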

User Concerns: Lack of Transparent Communication

Users' main frustration stems from Anthropic's communication. In mid-July, Anthropic quietly tightened Claude Code's usage restrictions, and many Max plan users soon began receiving "usage limit reached" notifications without a clear understanding of the criteria being applied. On social media, some complained that the system cut off access even though their usage was well below the officially stated limit of 900 messages per 5 hours. This opaque adjustment has led users to question both the stability of the service and the company's trustworthiness.
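
Anthropic has not published how that 900-message window is counted, which is part of the transparency complaint. As a purely hypothetical sketch, the snippet below implements one common interpretation, a sliding-window counter; under such a scheme, a burst of activity from several hours earlier still counts against the current window, which could explain users being cut off sooner than a fixed 5-hour block would suggest.

```python
# Hypothetical sketch only: Anthropic has not disclosed its accounting.
# A sliding-window limiter is one plausible way a "900 messages per
# 5 hours" cap could behave.
from collections import deque
import time

WINDOW_SECONDS = 5 * 60 * 60  # rolling 5-hour window
MAX_MESSAGES = 900

class SlidingWindowLimiter:
    def __init__(self) -> None:
        self.timestamps: deque[float] = deque()

    def allow(self, now: float | None = None) -> bool:
        """Record one message; return False once the rolling window is full."""
        now = time.time() if now is None else now
        # Evict timestamps that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= WINDOW_SECONDS:
            self.timestamps.popleft()
        if len(self.timestamps) >= MAX_MESSAGES:
            return False
        self.timestamps.append(now)
        return True
```

Under a scheme like this, usage never "resets" at a fixed time; each message only ages out 5 hours after it was sent, so heavy bursts linger against the quota.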

One developer pointed out that Claude Code's powerful features have made it the tool of choice for programming work, but that Anthropic appears to have been unprepared to manage resources under such high demand. Others on social media compared AI subscription services to a "gas station" where customers cannot see how much "fuel" (inference capacity) they are actually consuming; Anthropic's vague definition of its limits has only deepened that distrust.

Anthropic's Response and Future Outlook

Anthropic responded that demand has surged since Claude Code's launch, with some users running the tool around the clock and straining system resources. The company says the new limits are meant to optimize resource allocation, protect the experience of the majority of users, and curb abuses such as account sharing and reselling. It has also promised to explore other ways to support long-running use cases, but in the short term it will rely on limits to keep the service stable.

Even so, the user community continues to call for greater transparency from Anthropic, such as publishing how limits are calculated and giving advance notice of adjustments. Some users said that with better communication, the limits themselves would not have caused such controversy.

Industry Insights: The Debate on the Sustainability of AI Services

The incident has also sparked industry debate about the sustainability of AI subscription services. Some argue that any plan marketed as "unlimited" is difficult to sustain for compute-hungry AI workloads. Anthropic's case may serve as a warning to other AI companies: balancing growing user demand against finite computing resources while preserving user trust will be key to future growth.