【AIbase Report】On October 27, 2025, the Chinese AI startup MiniMax announced the open-source release of its latest large language model, MiniMax M2. Built for agent workflows and end-to-end coding tasks, the model pairs strong performance with notable efficiency: MiniMax claims a cost per token of roughly 8% of Anthropic's Claude Sonnet and around twice the speed, positioning M2 as a cost-effective AI option for developers and enterprises.

MoE Architecture: The Perfect Combination of Efficiency and Performance
MiniMax M2 is a compact model "designed to maximize coding and agent workflows". It has 230 billion total parameters, but its Mixture of Experts (MoE) architecture activates only 10 billion of them per inference step, keeping computational costs low. The model supports a 204,800-token context window and a maximum output of 131,072 tokens, letting it handle complex, long-horizon tasks robustly.
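The efficiency claim above rests on sparse activation: a gating network routes each token to a small subset of experts, so only a fraction of the total parameters run per step. The sketch below is a generic top-k MoE gating illustration, not MiniMax's actual (unpublished) routing code; the expert count and k are made up, while the 230B-total / 10B-active figures come from the article.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(8)]  # hypothetical 8-expert layer
weights = route(logits, k=2)
print(weights)
# Only the k selected experts execute, so the active-parameter fraction
# is roughly 10e9 / 230e9 ≈ 4.3% per the figures cited in the article.
```

Because the unselected experts never execute, compute per token scales with the active 10B parameters rather than the full 230B, which is what drives the cost advantage the article describes.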
Key Features: Extreme Optimization for Coding and Agents
Advanced Coding Capabilities: M2 is optimized for developer workflows, excelling at code generation, multi-file editing, compile-run-fix loops, and test-based verification. It integrates with mainstream tools such as Claude Code and Cursor, supporting end-to-end development needs.
Strong Agent Performance: The model can reliably drive long toolchains spanning Model Context Protocol (MCP) servers, shell commands, browser interactions, and code execution. In evaluations such as BrowseComp, M2 performs well at complex information retrieval, maintaining traceable evidence, and recovering from intermittent failures.
Benchmark Tests: First Among Open Source Models
According to benchmark results from the independent organization Artificial Analysis, MiniMax M2 ranks first globally among open-source models on a composite intelligence index covering mathematics, science, instruction following, coding, and agent tool use. On mathematics, coding, and agent tasks, its scores even surpass those of closed-source models such as Claude 3 Opus, while it maintains low latency and high concurrency, making it suitable for real-time applications.
Open Sharing: Apache 2.0 License and Free Access
MiniMax M2 is released under the Apache 2.0 License, which permits commercial use and unrestricted fine-tuning. For a limited time, M2 is freely accessible worldwide through the MiniMax agent platform and API. The model weights are available on Hugging Face for local deployment. Community feedback suggests that M2 handles sensitive queries with better factual reliability than some closed-source models, making it a candidate for scenarios with high security requirements.
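For developers trying the hosted API, a chat-style request might be assembled as below. This is a hypothetical sketch: the endpoint URL, model id string, and payload field names ("model", "messages", "max_tokens") are assumptions modeled on common chat-completion APIs, so verify them against MiniMax's text-generation guide before use.

```python
import json

# Assumed endpoint; confirm the real URL and auth scheme in MiniMax's docs.
API_URL = "https://api.minimax.io/v1/chat/completions"

def build_request(prompt, model="MiniMax-M2", max_tokens=1024):
    """Build a JSON body for a single-turn chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_request("Write a Python function that reverses a string.")
print(body)
# Once you have an API key, send it with e.g.:
#   requests.post(API_URL, data=body,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```

Separating payload construction from the network call makes the request shape easy to inspect and adapt once the real schema is confirmed.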
Model weights: https://huggingface.co/MiniMaxAI/MiniMax-M2
API documentation: https://platform.minimax.io/docs/guides/text-generation
