OpenAI Chief Financial Officer Sarah Friar recently disclosed the company's latest financial performance and infrastructure progress. The figures show OpenAI experiencing unprecedented exponential growth, with the number of customers it can serve strongly correlated with the computing resources it has available.
On the computing side, OpenAI's data center capacity has expanded remarkably over the past three years: from 0.2GW in 2023 to 0.6GW in 2024, with 1.9GW expected by 2025, a roughly 9.5-fold increase overall. This buildout directly drives commercialization, and annual recurring revenue (ARR) has soared in step: from $2 billion in 2023 to $6 billion in 2024, officially surpassing $20 billion in 2025. Sarah Friar emphasized that sufficient computing power is the key engine behind rapid customer adoption and profitability.
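The growth multiples quoted above can be sanity-checked directly from the reported figures; a minimal sketch (values taken from the article, variable names my own):

```python
# Figures as reported: data center capacity (GW) and ARR ($B) by year.
capacity_gw = {2023: 0.2, 2024: 0.6, 2025: 1.9}
arr_billion = {2023: 2, 2024: 6, 2025: 20}

capacity_growth = capacity_gw[2025] / capacity_gw[2023]
arr_growth = arr_billion[2025] / arr_billion[2023]

print(f"Capacity grew {capacity_growth:.1f}x over three years")  # 9.5x
print(f"ARR grew {arr_growth:.0f}x over the same period")        # 10x
```

Both series compound at roughly the same rate, which is the correlation between compute and revenue that Friar highlights.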
Beyond raw expansion, OpenAI is also working to optimize the cost-effectiveness of model inference; its inference cost has now dropped below $1 per million tokens. By adopting a tiered strategy of training on high-end hardware while routing high-volume workloads to low-cost infrastructure, the company balances performance against efficiency. On future revenue, OpenAI does not plan to limit itself to existing product sales: it intends to embed intelligent technology deeply into fields such as scientific research and energy systems, and to explore new pricing models based on intellectual property licensing and outcome-based approaches.
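To make the "below $1 per million tokens" figure concrete, here is an illustrative back-of-envelope calculation; the $0.90 rate and the 2,000-token request size are hypothetical examples, not figures from the article:

```python
# Hypothetical example rate, consistent with "below $1 per million tokens".
COST_PER_MILLION_TOKENS = 0.90  # USD

def request_cost(tokens: int, rate: float = COST_PER_MILLION_TOKENS) -> float:
    """Cost in USD for a request of `tokens` tokens at `rate` USD per 1M tokens."""
    return tokens / 1_000_000 * rate

# At this rate, a typical 2,000-token chat turn costs a fraction of a cent:
print(f"${request_cost(2_000):.4f}")  # $0.0018
```

At these unit economics, per-request cost is negligible, which is what makes high-volume commercialization viable.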
On the product roadmap, AI agents and workflow automation tools take priority. The company is committed to helping users automate tasks across applications and to enabling models to handle complex context over longer time horizons. In other words, OpenAI is evolving from a pure model provider into an ecosystem of fully automated intelligent systems.
Key points:
⚡ Computing Power and Revenue Surge Together: Data center capacity grew roughly 9.5-fold to 1.9GW within three years, and annual recurring revenue (ARR) officially surpassed $20 billion over the same period.
📉 Inference Cost Significantly Reduced: Through flexible infrastructure scheduling, inference costs have been compressed to below $1 per million tokens, greatly improving commercialization efficiency.
🤖 Targeting an Intelligent Agent Ecosystem: The future strategic focus will be on AI Agents and workflow automation, exploring cross-application task processing and deep applications of long context.