Recently, OpenAI reached a notable cooperation agreement with Google Cloud. OpenAI will use Google Cloud services to gain additional computing resources to meet its rapidly growing demand for AI model training and inference. The AIbase editorial team has compiled the latest information and offers an in-depth analysis of the significance and impact of this partnership.
Image note: AI-generated image, licensed via Midjourney
OpenAI and Google Cloud join forces to break the computing power bottleneck
According to the latest reports, OpenAI and Google Cloud officially finalized a cooperation agreement in May this year, aimed at providing additional computing capacity for training and running OpenAI's AI models. The deal came as a surprise: OpenAI and Google have long been direct competitors in AI, and Google's DeepMind division competes head-to-head with OpenAI in large language model development. The partnership marks a strategic shift in how OpenAI acquires computing resources and underscores its urgent need to diversify its sources of compute.
AIbase has learned that OpenAI previously maintained an exclusive cloud partnership with Microsoft Azure, but as ChatGPT and other models gained widespread adoption, its demand for computing power far outstripped what a single supplier could provide. Google Cloud, with its powerful infrastructure and self-developed TPU (Tensor Processing Unit) technology, has become an important partner in filling OpenAI's compute gap. The deal not only gives OpenAI stable access to computing resources but also further strengthens Google Cloud's competitiveness in the AI market.
Diversified compute strategy: OpenAI's multi-pronged approach
In recent years, OpenAI has actively sought diversified sources of computing power to reduce its dependence on any single supplier. Beyond the Google Cloud deal, OpenAI partnered with SoftBank and Oracle earlier this year to advance the "Stargate" data center project, valued at $500 billion, which is expected to supply roughly three-quarters of OpenAI's computing needs by 2030. OpenAI has also signed a multibillion-dollar compute agreement with CoreWeave and plans to finish designing its first in-house chip this year, further reducing its reliance on external hardware.
AIbase's analysis suggests that this series of moves reflects the AI industry's intense competition for computing power. Training and running large language models require massive computing resources, and the tight global supply of GPUs and specialized AI chips is forcing companies into multi-party cooperation. The OpenAI–Google Cloud deal is thus not only a technical collaboration but also a strategic choice made under conditions of compute scarcity.
Google Cloud's victory: A new role in the AI ecosystem
For Google, this cooperation is a major victory for its cloud service business. Google Cloud's sales reached $43 billion in 2024, accounting for 12% of Alphabet's total revenue, and it has attracted numerous AI enterprise customers including Apple, Anthropic, and Safe Superintelligence by providing efficient AI computing resources. AIbase noticed that Google successfully positioned itself as a "neutral computing power provider" for AI startups through its self-developed TPU and advanced AI infrastructure, gaining a unique advantage in its competition with Amazon AWS and Microsoft Azure.
However, the deal also poses challenges for Google. Supplying computing power to OpenAI means Google must strike a balance between its own chip needs and customer demand. Alphabet Chief Financial Officer Anat Ashkenazi said in April that Google Cloud's computing capacity could no longer fully meet customer demand. This partnership may further complicate Google's internal resource allocation, especially given the direct competition between its DeepMind division and OpenAI.
Industry impact: AI computing power competition reshapes the competitive landscape
The OpenAI–Google Cloud partnership is not merely a strategic adjustment by two companies; it reflects a profound shift in the dynamics of AI industry competition. AIbase observes that with the explosive growth of generative AI applications like ChatGPT, computing power has become the core bottleneck for AI development. OpenAI estimates that its computing costs from 2025 to 2030 will exceed $320 billion, pushing the industry into a new mode of "frenemy" alliances: yesterday's competitors may become today's partners out of a shared need for compute.
In addition, the deal may profoundly affect the relationship between Microsoft and OpenAI. Since 2019, Microsoft has invested nearly $14 billion in OpenAI and has long been its exclusive cloud provider. As OpenAI gradually shifts to a multi-cloud strategy, however, Microsoft's influence may wane. AIbase has learned that Microsoft and OpenAI are renegotiating the terms of their multibillion-dollar investment, and Microsoft's equity stake in OpenAI may be adjusted in the future.
Future outlook: An era driven by computing power
The OpenAI–Google Cloud partnership marks the AI industry's entry into a new phase centered on computing power. AIbase believes the deal will not only accelerate OpenAI's model development and deployment but also promote coordinated growth across the AI ecosystem. Still, the scarcity and high cost of compute will remain long-term challenges for the industry. Going forward, more companies may collaborate across industry lines to address compute bottlenecks together, driving broader adoption of and innovation in AI technology.