MiniMax M2.5-HighSpeed: 3 Times Faster Inference Speed, Empowering AI Applications
Following the release of the MiniMax M2.5 model, which was quickly integrated into more than 50 platforms, MiniMax launched the M2.5-HighSpeed model, delivering inference at 100 TPS, three times the speed of comparable products. At the same time, three tiers of Coding Plan packages were released, and users can receive a 90% discount by inviting friends, further improving the efficiency of MiniMax's AI services.