As the second half of the AI computing-power race unfolds, China's large AI models are pulling ahead on the strength of remarkable application penetration. According to the latest data released by OpenRouter, weekly usage of Chinese AI models set a new record in the past week (March 9 to March 15), surpassing U.S. models on the international mainstream platform for the second consecutive week and demonstrating strong momentum.
The data show that weekly usage of Chinese AI models surged to 46.9 trillion Tokens, up 11.83% from the previous week; by contrast, weekly usage of U.S. models fell 9.33% to 32.94 trillion Tokens. Chinese AI models now hold a clean sweep of the top three spots in the global usage ranking:
MiniMax M2.5: Held the top spot for a fifth consecutive week, with weekly usage of 17.5 trillion Tokens.
Stephen Star Step3.5Flash: Broke into the top three for the first time on the strength of its fast responses and free-access strategy, taking second place as its weekly usage surged 79%.
DeepSeek V3.2: Took third place with 10.4 trillion Tokens.
A mysterious new face stood out in this computing-power frenzy: a model called Hunter Alpha. Launched only on March 11, it quickly climbed to seventh place worldwide with 6.66 trillion Tokens.
According to OpenRouter's platform data, Hunter Alpha is a trillion-parameter model designed specifically for agent applications, featuring an ultra-long context window of 1 million Tokens. It performs remarkably well in long-horizon planning, complex logical reasoning, and multi-step task execution, and shows particularly high reliability and instruction-following accuracy when integrated with frameworks such as OpenClaw.
From MiniMax's continued dominance to Stephen Star's sudden rise, and now the surprise appearance of the mysterious Hunter Alpha