Recently, the US tech giant Google officially signed an agreement with the US Department of Defense, confirming that it will make its in-house cutting-edge artificial intelligence large model, Gemini, available for military use. According to reports, the core of the collaboration is to let the military draw on Gemini's powerful computing and analytical capabilities in classified missions.

Although the specific terms of the agreement have not been made public, the two parties have reached a clear consensus on the boundaries of the cooperation. The agreement stipulates that Gemini's application will be strictly limited to legitimate military-related uses. To address concerns about potential misuse of the technology, the scope of cooperation explicitly excludes two sensitive areas: first, using the technology for mass surveillance of American citizens is prohibited; second, applying it to fully autonomous weapon systems (so-called "killing machines") is strictly forbidden.

In fact, Google is not the first tech giant to open its doors to the defense sector. The US Department of Defense had already reached similar agreements with OpenAI and with xAI, the company founded by Elon Musk, a sign that generative AI is accelerating its push into the core of national security.

Notably, companies have taken different stances in this wave of AI "militarization." Anthropic, another major Silicon Valley player, was previously placed on a US government "supply chain risk list affecting national security" after it explicitly refused to adapt its Claude model for military use. The episode underscores that, in the current technological environment, leading AI technologies are no longer merely a focus of commercial competition but a key variable in national strategic rivalry.

The deal marks a substantive step for Google in balancing technology ethics against government collaboration, and signals that future defense competition will increasingly depend on the deep involvement of large models.