At its Advancing AI 2025 event, AMD launched its new Instinct MI350 series AI chips and previewed the next-generation Instinct MI400. The presentation caught the attention of many industry professionals, and OpenAI CEO Sam Altman personally appeared on stage to share his experience collaborating with AMD during the chip development process.
Advanced AI Computing Capability
The new AMD Instinct MI350 series GPUs, based on the CDNA 4 architecture, were designed specifically for modern AI infrastructure. Within the series, the MI350X and MI355X GPUs deliver significantly improved AI computing performance. The MI350 series is equipped with 288 GB of HBM3E memory, with memory bandwidth reaching up to 8 TB/s. Compared with the previous generation, AI compute has increased by up to 4 times, while inference performance has increased by up to 35 times.
Compared with Nvidia's competing chips, AMD says the MI355X offers up to 40% more tokens per dollar, making it an attractive choice. At the eight-GPU platform level, the FP4 throughput of the MI355X reaches 161 PFLOPS, while the FP16 throughput of the MI350X reaches 36.8 PFLOPS, ensuring efficient operation in AI applications.
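The headline PFLOPS figures above are eight-GPU platform totals, so a quick back-of-the-envelope division recovers the approximate per-GPU throughput. The per-GPU numbers below are assumptions derived by dividing the stated platform totals by eight, not official single-chip specifications.

```python
# Sanity-check the platform-level PFLOPS figures by deriving per-GPU numbers.
# Per-GPU values here are assumptions (platform total / 8), not official specs.

GPUS_PER_PLATFORM = 8

mi355x_fp4_platform_pflops = 161.0   # stated 8-GPU platform FP4 throughput
mi350x_fp16_platform_pflops = 36.8   # stated 8-GPU platform FP16 throughput

mi355x_fp4_per_gpu = mi355x_fp4_platform_pflops / GPUS_PER_PLATFORM
mi350x_fp16_per_gpu = mi350x_fp16_platform_pflops / GPUS_PER_PLATFORM

print(f"MI355X FP4 per GPU:  ~{mi355x_fp4_per_gpu:.1f} PFLOPS")   # ~20.1
print(f"MI350X FP16 per GPU: ~{mi350x_fp16_per_gpu:.1f} PFLOPS")  # ~4.6
```

The roughly 4:1 ratio between FP4 and FP16 throughput reflects the usual doubling of peak rate with each halving of precision.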
Flexible Cooling Solutions and Evolved Deployment
Beyond performance, AMD GPUs also offer flexible cooling configurations, suitable for large-scale deployments. For example, an air-cooled rack can support up to 64 GPUs, while a direct liquid cooling environment can support up to 128 GPUs, greatly increasing their application flexibility.
Open Source Software Acceleration Platform ROCm7
To further enhance GPU performance, AMD also announced ROCm 7, the next version of its open-source software acceleration platform. After another year of development, ROCm has matured and is now deeply integrated with widely used open models such as Llama and DeepSeek. The upcoming ROCm 7 release is claimed to deliver over 3.5 times the inference performance of the previous version, offering strong technical support to AI developers.
Next-generation AI Chips Instinct MI400
The Instinct MI400 series is AMD's next-generation flagship AI chip, expected to be equipped with 432 GB of high-speed HBM4 memory, with memory bandwidth reaching up to 19.6 TB/s and 300 GB/s of scale-out bandwidth per GPU. At FP4 precision, the computing performance of the MI400 can reach 40 petaflops, optimized for low-precision AI workloads. Additionally, the MI400 series uses UALink technology to connect up to 72 GPUs into a unified computing unit, breaking the communication limitations of traditional architectures.
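Multiplying the per-GPU figures above across a 72-GPU UALink domain gives a rough sense of the aggregate scale. The rack-level totals below are simple multiplications of the announced per-GPU numbers, not published specifications.

```python
# Rough aggregate capacity of a 72-GPU UALink domain built from MI400-class
# parts. Totals are derived by multiplication, not official rack specs.

GPUS = 72
hbm4_per_gpu_gb = 432          # GB of HBM4 per GPU (announced)
fp4_per_gpu_pflops = 40        # FP4 PFLOPS per GPU (announced)

total_hbm_tb = GPUS * hbm4_per_gpu_gb / 1000       # decimal TB
total_fp4_eflops = GPUS * fp4_per_gpu_pflops / 1000

print(f"Total HBM4 in domain: ~{total_hbm_tb:.1f} TB")          # ~31.1 TB
print(f"Total FP4 compute:    ~{total_fp4_eflops:.2f} EFLOPS")  # ~2.88 EFLOPS
```

Pooling roughly 31 TB of HBM across a single 72-GPU domain is what lets very large models be sharded without leaving the high-bandwidth fabric.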
Cooperation Projects with Multiple Companies
Currently, companies including Oracle, Microsoft, Meta, and xAI are collaborating with AMD to use its AI chips. Oracle will be among the first to adopt solutions powered by the Instinct MI355X in its cloud infrastructure. Mahesh Thiagarajan, the Oracle executive responsible for cloud infrastructure, stated that the collaboration has greatly enhanced the scalability and reliability of Oracle's services, and that the company plans to deepen the partnership going forward.