AIBase
MiniMax M2.5-HighSpeed: 3 Times Faster Inference Speed, Empowering AI Applications

Following its release, the MiniMax M2.5 model was quickly integrated into more than 50 platforms. MiniMax has now launched the M2.5-HighSpeed model, which delivers an inference speed of 100 TPS, three times that of comparable products. The company also released three tiers of Coding Plan packages, and users who invite friends can receive a 90% discount, further improving the efficiency of its AI services.
