In the rapidly evolving field of artificial intelligence, OpenAI recently disclosed an investment of up to $5 billion to expand its computing resources, a figure revealed by OpenAI president Greg Brockman in a related legal case. The spending is expected to be completed by 2026, underscoring the surging demand for computing power to train and run large AI models.

Data shows that OpenAI's spending on computing power has grown by orders of magnitude from roughly $30 million in 2017. The shift marks generative AI's transition from an early experimental phase to large-scale commercialization, with demand rising rapidly. Today, running products like ChatGPT, training models, and serving enterprise APIs all depend on large GPU clusters and cloud computing infrastructure.

Industry experts note that the $5 billion figure covers not only model training but also the cost of daily inference for users worldwide and ongoing investment in larger models. As user numbers continue to grow, AI companies' "computing bills" are rising in tandem.

More notably, OpenAI also disclosed a longer-term target: cumulative computing investment could reach as much as $60 billion by 2030. This suggests that in the coming years, competition in the AI industry will shift from algorithmic capability to control over computing resources and infrastructure.

At the same time, the industry as a whole is entering a "computing arms race": tech giants such as Microsoft, Google, and Amazon are expanding data centers at scale and racing to lock in GPU supply to stay ahead in the next generation of AI competition. Analysts believe this trend will further boost global investment in AI infrastructure, opening new opportunities for the industry.
