Google CEO Sundar Pichai announced at a recent Google Cloud event that the number of tokens processed by Google's AI products and interfaces each month has exceeded 1.3 quadrillion. This surpasses the previous milestone of 980 trillion tokens that Google announced in June, an increase of roughly 320 trillion tokens.

The Truth Behind the Token Surge: A Measure of Computational Complexity

Although 1.3 quadrillion tokens a month sounds like a dramatic jump in user activity, the metric mainly reflects the rapidly growing computational complexity on the backend of Google's AI models, not a direct measure of user engagement or practical value.

Analysts suggest the sharp rise in token consumption is largely driven by the rollout of new reasoning models such as Gemini 2.5 Flash, which perform far more internal computation per request. One analysis found that Gemini 2.5 Flash uses about 17 times more tokens per request than its predecessor, making reasoning-heavy tasks roughly 150 times more expensive to run. Compute-intensive multimodal features such as video, image, and audio processing likely add to the total token count as well.
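As a rough illustration of how a 17-fold jump in billed tokens can translate into something like a 150-fold jump in cost, the sketch below combines two assumed effects: more output tokens per request and a higher per-token price when reasoning output is billed. The prices and the roughly 9x price multiplier are placeholders chosen only to make the arithmetic visible; they are not published Gemini rates.

```python
# Back-of-envelope sketch: how a reasoning model can inflate per-request cost.
# All prices and multipliers below are illustrative assumptions, not published Gemini pricing.

BASE_PRICE_PER_M_TOKENS = 0.60       # assumed $ per 1M output tokens, non-reasoning mode
REASONING_PRICE_PER_M_TOKENS = 5.30  # assumed $ per 1M output tokens, reasoning mode (~9x higher)

def request_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD of a single request given its billed output tokens."""
    return tokens / 1_000_000 * price_per_million

baseline_tokens = 500                    # hypothetical non-reasoning response length
reasoning_tokens = baseline_tokens * 17  # ~17x more tokens once reasoning traces are billed

baseline_cost = request_cost(baseline_tokens, BASE_PRICE_PER_M_TOKENS)
reasoning_cost = request_cost(reasoning_tokens, REASONING_PRICE_PER_M_TOKENS)

print(f"baseline:  {baseline_tokens:>6} tokens -> ${baseline_cost:.6f}")
print(f"reasoning: {reasoning_tokens:>6} tokens -> ${reasoning_cost:.6f}")
print(f"cost multiple: {reasoning_cost / baseline_cost:.0f}x")   # ~150x under these assumptions
```

Under these assumptions the two effects compound to about 150x, which is the broader point: token counts and costs can balloon without any corresponding growth in user activity.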

The figure is therefore better read as an indicator of Google's backend computational load and infrastructure expansion than as a direct measure of user activity or delivered value.


Environmental Claims Under Scrutiny: Mismatch Between Computational Scale and Reports

The massive token consumption reported by Google has also intensified criticism of its environmental impact reports.

Google's previously published environmental report stated that a typical Gemini text prompt uses only 0.24 watt-hours of electricity, emits 0.03 grams of carbon dioxide equivalent, and consumes 0.26 milliliters of water, roughly the energy of watching television for less than nine seconds. Critics argue, however, that these figures capture only the smallest unit of computation and may be based on lightweight models with lower resource consumption.

That framing, they say, ignores the true scale of AI operations: heavier use cases such as document analysis, multimodal generation, and agent-driven web search, along with the enormous aggregate computation that 1.3 quadrillion tokens a month represents across the entire system. The latest token statistics contrast sharply with the modest per-prompt impact described in Google's official environmental assessment, underscoring the gap between the company's rapidly growing computational demands and its environmental claims.
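To make the scale concrete, here is a hedged back-of-envelope calculation that applies Google's own 0.24 Wh per-prompt figure to the reported monthly token volume. The tokens-per-prompt value is a pure assumption (real workloads vary enormously, and Google has not published an average), so the output only illustrates how sensitive any aggregate estimate is to what counts as a "typical" prompt.

```python
# Rough scale illustration: applying Google's per-prompt energy figure to monthly token volume.
# ASSUMED_TOKENS_PER_PROMPT is a hypothetical average, not a published number.

MONTHLY_TOKENS = 1.3e15            # ~1.3 quadrillion tokens per month, per Google's announcement
WH_PER_TEXT_PROMPT = 0.24          # Google's reported energy for a "typical" Gemini text prompt
ASSUMED_TOKENS_PER_PROMPT = 1_000  # assumption for illustration; real prompts vary widely

implied_prompts_per_month = MONTHLY_TOKENS / ASSUMED_TOKENS_PER_PROMPT
implied_energy_wh = implied_prompts_per_month * WH_PER_TEXT_PROMPT

print(f"implied prompts per month: {implied_prompts_per_month:.2e}")
print(f"implied energy per month:  {implied_energy_wh / 1e9:.0f} GWh")  # ~312 GWh under these assumptions
```

Even with this assumed prompt length, the implied monthly total lands in the hundreds of gigawatt-hours, which illustrates why critics consider the per-prompt framing incomplete.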