World's First MoE Large Model Trained Entirely on AMD Hardware: ZAYA1 Launches with 14T Tokens and CCA Attention, Performance Comparable to Qwen3
AMD, IBM, and Zyphra have launched ZAYA1, the first MoE model trained entirely on AMD hardware. Pretrained on 14T tokens, it matches the performance of the Qwen3 series, with particularly strong math reasoning. Training ran on 128 nodes of 8 MI300X GPUs each (750 PFLOPs aggregate) and used the CCA attention mechanism and curriculum learning. Optimized versions are to follow.
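A quick back-of-the-envelope sketch of the cluster figures quoted above: the node and GPU counts come from the announcement, while the implied per-GPU number is derived here under the assumption that 750 PFLOPs is the aggregate peak compute across all GPUs (the precision format is not stated in the summary).

```python
# Hypothetical sanity check of the quoted cluster size; not an official AMD spec sheet.
nodes = 128
gpus_per_node = 8
total_gpus = nodes * gpus_per_node           # 1024 MI300X GPUs in total

aggregate_pflops = 750                        # aggregate peak compute quoted in the announcement
per_gpu_pflops = aggregate_pflops / total_gpus  # derived, assuming even distribution

print(f"Total GPUs: {total_gpus}")
print(f"Implied per-GPU peak: {per_gpu_pflops:.2f} PFLOPs (~{per_gpu_pflops * 1000:.0f} TFLOPs)")
```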