MiniMax-M2 is a compact Mixture of Experts (MoE) model built specifically for coding and agentic workflows. It has 230 billion total parameters, of which only 10 billion are activated per token. It delivers strong performance on coding and agent tasks while retaining solid general intelligence, and it remains compact, fast, and cost-effective.
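Since only a fraction of the parameters are active per token, the model can be served like a standard causal language model. Below is a minimal sketch of loading and querying it with the Hugging Face Transformers library; the repository name `MiniMaxAI/MiniMax-M2`, the need for `trust_remote_code`, and chat-template support are assumptions rather than details confirmed by this card.

```python
# Minimal sketch: loading MiniMax-M2 with the Hugging Face Transformers library.
# Assumes the checkpoint is published as "MiniMaxAI/MiniMax-M2" and loads through
# the standard AutoTokenizer / AutoModelForCausalLM entry points.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M2"  # assumed Hub repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",     # shard the MoE weights across available GPUs
    torch_dtype="auto",    # keep the dtype stored in the checkpoint
    trust_remote_code=True,
)

# Simple chat-style generation using the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For agentic or high-throughput use, the same checkpoint would typically be served behind an inference engine rather than loaded in-process, but the loading pattern above is enough to verify the model end to end.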