On March 5, 2026, the U.S. Department of Defense officially classified the AI unicorn Anthropic and its products as a "supply chain risk," an unprecedented ruling that marks a dramatic escalation of tensions between leading AI laboratories and the military's demands.
According to Bloomberg, citing senior defense officials, the classification followed weeks of confrontation. Anthropic CEO Dario Amodei explicitly refused to allow the military to use the company's AI systems for mass surveillance of American citizens or to power fully autonomous weapons operating without human intervention, while the Pentagon insisted that private contractors should not restrict the military's use of cutting-edge technologies.

The designation has significant implications for the industry: the "supply chain risk" label had previously been applied only to hostile foreign entities. Any institution collaborating with the Pentagon must now certify that it does not use Claude-series models. U.S. military operations in the Middle East, including those involving Iran, currently rely heavily on Claude models integrated into Palantir's Maven system for operational data management, so the ban could force a reconfiguration of the existing military-intelligence infrastructure.
In contrast to Anthropic's firm stance, OpenAI has reached an agreement with the Department of Defense permitting the military to use its systems for "all legitimate purposes." OpenAI president Greg Brockman recently donated $25 million to allied political organizations. Amodei, for his part, criticized the Department of Defense's move as "retaliatory" and suggested it was tied to his refusal to support particular political positions.
Hundreds of employees at OpenAI and Google have since launched protests calling for the designation to be reversed. The episode not only highlights the difficult trade-off AI companies face between military ethics and commercial access, but also signals that frontier-AI governance is entering an era of strong, government-driven regulation, posing serious challenges to the autonomy of technology companies.