The public feud between AI giants OpenAI and Anthropic has escalated again. OpenAI CEO Sam Altman recently questioned his competitor's latest safety-restricted model on a podcast.
He argued that Anthropic is exploiting public fear of the technology to exaggerate its products' actual capabilities. In Altman's view, the strategy is not really about safety but is a business tactic.

Restricting access to an elite few: accusations of erecting technological barriers
The controversy stems from Anthropic's release this month of the Mythos model, which is currently available only to a select group of enterprise clients. The company says that, given the model's powerful capabilities, it has withheld public access to prevent cybercrime.
Altman countered that this approach amounts to keeping artificial intelligence in the hands of a few elites, wryly comparing the marketing strategy to first stoking panic and then selling expensive shelters to those who feel threatened.
An industry-wide marketing problem: exaggerated promotion raises concerns
In fact, emphasizing "technological dangers" as indirect proof of "technological strength" is not uncommon in the AI industry. Many practitioners use alarmist language to attract attention and gain an edge in a competitive market.
Although Altman himself has often spoken about the risks AI could bring, his criticism this time clearly targets the deeper issue of industry monopolization. This debate over technological transparency and the public interest has pushed the discussion of AI's safety boundaries to a new level.
