French startup Mistral is once again making headlines, this time collaborating with the open-source team All Hands AI to launch a new language model called Devstral. The model packs 24 billion parameters yet requires significantly less compute than many of its counterparts, making it well suited to local deployment and on-device use. A single RTX 4090 graphics card or a machine with 32GB of RAM is enough to run Devstral, giving users a more flexible experience.
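For local use, one common route is to serve the model with an OpenAI-compatible runtime (for example, Ollama or vLLM) and query it over HTTP. The sketch below assumes such a server is already running on the local machine and that the model is exposed under the tag devstral; the port, base URL, and model tag are illustrative assumptions, not details from the article.

```python
# Minimal sketch: query a locally served Devstral instance through an
# OpenAI-compatible endpoint (e.g. Ollama or vLLM running on this machine).
# The base_url, port, and model tag below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    api_key="not-needed-for-local",        # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="devstral",  # hypothetical local model tag
    messages=[
        {"role": "user", "content": "Refactor this function to handle empty input: ..."}
    ],
)

print(response.choices[0].message.content)
```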
As open-source communities gain influence, Mistral has used Devstral to demonstrate its capabilities to developers. Despite past criticism for keeping its Medium 3 model closed, this renewed openness is encouraging. Devstral is released under the permissive Apache 2.0 license, allowing developers and organizations to freely modify, deploy, and commercialize it, opening new possibilities for numerous projects.
Devstral is designed primarily to address real-world software engineering challenges. While many large language models handle isolated programming tasks well, such as writing standalone functions or completing code, they often struggle with context across complex codebases. Devstral focuses on exactly this area: it can resolve real GitHub issues and works with code agent frameworks such as OpenHands and SWE-Agent.
On SWE-Bench Verified, a leading software engineering benchmark, Devstral scored an impressive 46.8%, far surpassing other open-source models and even beating some closed-source models, such as GPT-4.1-mini, by around 20 percentage points. This demonstrates Devstral's potential in practical programming tasks.
Under the same testing framework, Devstral also outperformed several much larger models, including DeepSeek-V3-0324 and Qwen3-235B-A22B. Its efficiency and strong performance have earned praise from developers.
In addition, Devstral is available through Mistral's La Plateforme API at $0.10 per million input tokens and $0.30 per million output tokens, offering excellent value for money.
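As a rough illustration of hosted access, the snippet below sketches a call through Mistral's Python SDK; the model identifier devstral-small-latest and the exact client interface are assumptions based on Mistral's published SDK, not details stated in the article.

```python
# Sketch: calling Devstral through Mistral's hosted La Plateforme API.
# Assumes the mistralai Python SDK (v1-style interface) and that the model
# is exposed under an identifier such as "devstral-small-latest".
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="devstral-small-latest",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Write a unit test for a binary search function."}
    ],
)

print(response.choices[0].message.content)
```

At the listed rates, a request that consumes 10,000 input tokens and produces 2,000 output tokens would cost roughly $0.001 + $0.0006, or about $0.0016.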