French AI lab Mistral is getting into the reasoning AI model game.
On Tuesday morning, Mistral revealed Magistral, its first family of reasoning models. Like other reasoning models, such as OpenAI's o3 and Google's Gemini 2.5 Pro, Magistral works through problems step by step for improved consistency and reliability across topics such as math and physics.
Magistral comes in two flavors: Magistral Small and Magistral Medium. Magistral Small has 24 billion parameters and is available for download from the AI dev platform Hugging Face under a permissive Apache 2.0 license. (Parameters are the internal components of a model that guide its behavior.) Magistral Medium, a more capable model, is in preview on Mistral's Le Chat chatbot platform and the company's API, as well as on third-party partner clouds.
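Because Magistral Small ships as open weights, it can in principle be loaded like any other Hugging Face checkpoint. The sketch below shows what that might look like with the transformers text-generation pipeline; the repo ID is a placeholder (the article doesn't give one), and a 24-billion-parameter model generally needs substantial GPU memory or multi-GPU sharding.

```python
# Sketch only: the repo ID below is a placeholder, not confirmed by the article.
from transformers import pipeline

MODEL_ID = "mistralai/Magistral-Small"  # hypothetical; check Hugging Face for the exact name

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    device_map="auto",   # spread weights across available GPUs/CPU
    torch_dtype="auto",  # use the checkpoint's native precision
)

# Reasoning models are prompted like ordinary chat models; the step-by-step
# "thinking" appears in the generated text itself.
messages = [
    {"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed in km/h?"}
]
output = generator(messages, max_new_tokens=512)
print(output[0]["generated_text"])
```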
"[Magistral is] suited for a wide range of enterprise use cases, from structured calculations and programmatic logic to decision trees and rule-based systems," Mistral writes in a blog post. "[The models are] fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user's language."
Founded in 2023, Mistral is a frontier model lab building a range of AI-powered services, including the aforementioned Le Chat and mobile apps. It's backed by venture investors like General Catalyst and has raised over €1.1 billion (roughly $1.24 billion) to date.
Despite its formidable resources, Mistral has lagged behind other leading AI labs in certain areas, like developing reasoning models. Magistral doesn't appear to be an especially competitive release, either, judging by Mistral's own benchmarks.
On GPQA Diamond and AIME, tests that evaluate a model's physics, math, and science skills, Magistral Medium underperforms Gemini 2.5 Pro and Anthropic's Claude Opus 4. Magistral Medium also fails to surpass Gemini 2.5 Pro on a popular programming benchmark, LiveCodeBench.
Perhaps that's why Mistral touts Magistral's other strengths in its blog post. Magistral delivers answers at "10x" the speed of competitors in Le Chat, Mistral claims, and supports a wide array of languages, including Italian, Arabic, Russian, and Simplified Chinese.
"Building on our flagship models, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making," the company writes in its post, "whether executing risk assessment and modelling with multiple factors, or calculating optimal delivery windows under constraints."
The release of Magistral comes after Mistral debuted a "vibe coding" client, Mistral Code. A few weeks prior to that, Mistral launched several coding-focused models and rolled out Le Chat Enterprise, a corporate-focused chatbot service that offers tools like an AI agent builder and integrates Mistral's models with third-party services like Gmail and SharePoint.