French AI lab Mistral has announced the official launch of Magistral, its first family of reasoning models. The series comes in two versions, Magistral Small and Magistral Medium, aimed at strengthening logical reasoning in disciplines such as mathematics and physics. Mistral says the Magistral models work through problems step by step to improve the consistency and reliability of their results.


Magistral Small has 24 billion parameters and is available for free download on the AI development platform Hugging Face under the Apache 2.0 license. Magistral Medium is the more capable of the two and is currently in preview, accessible only through Mistral's Le Chat platform, the company's API, and partner cloud platforms.
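For readers who want to try the open weights, below is a minimal sketch of loading Magistral Small with the Hugging Face transformers library. The repo ID `mistralai/Magistral-Small-2506` and the prompt are assumptions for illustration, not details from Mistral's announcement; check the model page on Hugging Face for the exact identifier and recommended inference setup.

```python
# Sketch: running Magistral Small locally via Hugging Face transformers.
# The repo ID below is an assumption; verify it on the Mistral organization page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "mistralai/Magistral-Small-2506"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.bfloat16,  # 24B parameters: roughly 48 GB of memory in bf16
    device_map="auto",
)

# A simple step-by-step reasoning prompt (illustrative only).
messages = [{"role": "user", "content": "Solve step by step: what is 17% of 2,350?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

At this size the model is demanding on consumer hardware, so a quantized variant or a multi-GPU setup may be more practical than the bf16 configuration sketched here.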

In its official blog post, Mistral notes that the Magistral models are suited to a range of enterprise applications, including structured calculations, programmatic logic, decision trees, and rule-based systems. The models are fine-tuned for multi-step logic, offering greater explainability and a traceable chain of thought presented in the user's own language.


Founded in 2023, Mistral is a frontier model lab that develops a range of AI-driven products and services, including Le Chat and its mobile applications. The company is backed by venture capital firms such as General Catalyst and has raised more than 1.1 billion euros (approximately 9.022 billion RMB). In reasoning model development, however, Mistral still trails some of the leading AI labs.

According to Mistral's own benchmarks, Magistral's competitiveness looks limited. On GPQA Diamond and AIME, which evaluate models' skills in physics, mathematics, and science, Magistral Medium scored below Gemini 2.5 Pro and Claude Opus 4. On the LiveCodeBench coding benchmark, Magistral Medium likewise failed to surpass Gemini 2.5 Pro.

Nevertheless, Mistral emphasized Magistral's advantages in speed and multilingual support. The company claims that on Le Chat, Magistral answers ten times faster than competing models and supports many languages, including Italian, Arabic, Russian, and Simplified Chinese. In its blog post, Mistral said Magistral is designed for research, strategic planning, operational optimization, and data-driven decision-making, handling tasks such as multi-factor risk assessment, modeling, and computing optimal delivery windows under constraints.
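Since the Medium preview is reachable through Mistral's API, here is a hedged sketch of a chat completion call with the official mistralai Python client. The model identifier `magistral-medium-latest` and the prompt are assumptions; consult Mistral's API documentation for the exact model name and usage limits.

```python
# Sketch: calling the hosted Magistral Medium preview through Mistral's chat API.
# The model identifier is an assumption; check Mistral's API docs for the exact name.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="magistral-medium-latest",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": "Assess, step by step, the main risks of moving a payment "
                       "service to a new cloud region under a two-week deadline.",
        }
    ],
)
print(response.choices[0].message.content)
```

If Mistral also serves Magistral Small through the same API, only the model string would need to change.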

The Magistral release comes shortly after Mistral launched Mistral Code, its "vibe coding" client. A few weeks earlier, Mistral also released several coding-focused models and launched Le Chat Enterprise, a business-oriented chat service that offers tools such as an AI agent builder and integrates Mistral's models with third-party services like Gmail and SharePoint.

Official Blog: https://mistral.ai/news/magistral

Key Points:

📊 Mistral has launched the Magistral reasoning model series, including Small and Medium versions.  

🚀 Magistral Small is already available for download on Hugging Face, while the Medium version is currently in preview mode.  

🌍 The models support multiple languages, answer ten times faster than competitors on Le Chat according to Mistral, and suit a variety of enterprise scenarios.