Qualcomm has officially launched its next-generation artificial intelligence chips, the AI200 and AI250, in a direct challenge to market leader NVIDIA. The announcement drew widespread attention, and the company's stock price rose by more than 20%.
The Qualcomm AI200 is a rack-scale solution designed for AI inference, with the goal of lowering total cost of ownership (TCO) while improving performance. It supports 768GB of LPDDR memory, giving large language models (LLMs) and multimodal models (LMMs) the high memory capacity inference demands at a lower cost.
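To put 768GB in context, a rough back-of-the-envelope calculation shows how large a model's weights could fit at different quantization levels. This is our own illustrative arithmetic, not a Qualcomm specification; the 10% overhead reserved for KV cache and activations is an assumed figure.

```python
# Rough sizing: what model size fits in 768 GB of accelerator memory?
# Assumptions (ours, not Qualcomm's): weights dominate memory use, and
# ~10% is reserved for KV cache, activations, and runtime overhead.

def max_params_billion(memory_gb: float, bytes_per_param: float,
                       overhead_frac: float = 0.10) -> float:
    """Largest parameter count (in billions) whose weights fit in memory."""
    usable_bytes = memory_gb * 1e9 * (1 - overhead_frac)
    return usable_bytes / bytes_per_param / 1e9

for precision, nbytes in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{precision}: ~{max_params_billion(768, nbytes):.0f}B parameters")
```

Under these assumptions, a single 768GB card could hold roughly a 346B-parameter model at FP16, or around 1.4 trillion parameters at INT4, which is why high memory capacity matters for serving large LLMs without sharding across many devices.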

In contrast, the Qualcomm AI250 employs an innovative near-memory computing architecture that delivers more than ten times the memory bandwidth while significantly reducing power consumption, improving both the efficiency and the performance of AI inference workloads. Both chips use direct liquid cooling for heat dissipation.
Beyond the hardware, Qualcomm also introduced a comprehensive AI software stack, optimized for inference, that spans every layer from applications down to system software. Developers can deploy and manage models through Qualcomm's Efficient Transformers Library and its AI inference suite. The stack supports the major machine learning frameworks and inference engines, and provides a rich set of tools, libraries, and APIs for building AI applications.
The AI200 and AI250 are expected to become commercially available in 2026 and 2027, respectively. The release not only demonstrates Qualcomm's ambitions in AI but has also drawn significant attention from the market.
Key Points:
🌟 Qualcomm released AI200 and AI250 chips, challenging the market leader NVIDIA.
📈 Qualcomm's stock price surged by more than 20% due to the new chip release.
🖥️ Both chips use direct liquid cooling and support a wide range of AI inference applications and tools.


