At a sensitive moment where AI ethics and national security intersect, NVIDIA CEO Jensen Huang spoke publicly at the GTC 2026 conference on Thursday, urging tech industry leaders to exercise restraint when discussing AI risks. Huang emphasized that while highlighting technology's potential is necessary, deliberately stoking panic could be counterproductive and even undermine national competitiveness.
The statement comes amid an escalating conflict between AI startup Anthropic and the U.S. government. Anthropic, the developer of the Claude chatbot, broke with the Pentagon after insisting on contract clauses prohibiting the use of its AI tools for domestic surveillance and fully autonomous weapons systems. The Trump administration responded by classifying Anthropic as a "supply chain risk" and planning to terminate all of its projects within the government.
"It's software, not a living being"
Regarding the "AI threat theory" circulating in public discourse, Huang offered a calm assessment, stating bluntly: "AI is neither a living organism nor an alien being; it has no consciousness and is essentially computer software." He warned that extreme, catastrophic, evidence-free claims could cause real harm far beyond what people imagine.
Optimistic Outlook: Revenue Could Exceed $1 Trillion
Despite Anthropic's ongoing legal battles and standoff with the government, Huang expressed high expectations for its financial prospects, predicting that Anthropic's revenue could exceed $1 trillion by 2030. He suggested that CEO Dario Amodei's own forecast for the company was, if anything, conservative.
Strategic Considerations for Supply Chain Diversification
Beyond industry disputes, Huang also addressed risk management in global chip manufacturing. He reiterated that AI supply chains must be diversified: to reduce the threats posed by excessive concentration, NVIDIA is actively expanding production in South Korea, Japan, and the U.S. itself to ensure supply resilience for what he called a "strategic material."
In Huang's view, the greatest risk the U.S. faces in AI is not the technology itself, but a stagnation in adoption driven by excessive fear or paranoia. This struggle over safety, ethics, and national interests is becoming a key force reshaping the AI industry landscape.