DeepSeek V3.2 Goes Open Source Under MIT License, Matches GPT-5 Performance
DeepSeek releases V3.2 under MIT license with 671B MoE architecture, matching GPT-5 at one-tenth the cost and achieving gold-medal performance on IMO and IOI competitions.

DeepSeek has released V3.2, the latest iteration of its open-source large language model, and the numbers are hard to ignore. Built on a 671 billion parameter Mixture-of-Experts (MoE) architecture and released under the permissive MIT license, DeepSeek V3.2 matches GPT-5 across a wide range of benchmarks while costing roughly one-tenth as much to run. The model also achieves gold-medal level performance on both the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI), along with a 96% score on AIME 2025. And then there is V3.2-Speciale, a reasoning-enhanced variant that DeepSeek claims surpasses GPT-5 outright.
The Architecture
DeepSeek V3.2 uses a Mixture-of-Experts architecture, which means that while the model has 671 billion parameters in total, only a small fraction of them is active for any given token. This is the key to its cost efficiency. A dense model with 671 billion parameters would be prohibitively expensive to run for most applications, but MoE routing allows the model to activate only the most relevant experts for each token, dramatically reducing compute requirements.
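To make the routing idea concrete, here is a minimal, illustrative sketch of a top-k MoE layer in PyTorch. The layer sizes, number of experts, and top-k value are arbitrary placeholders rather than DeepSeek's actual configuration; the point is simply that only a handful of expert networks run for each token.

    # Minimal, illustrative top-k Mixture-of-Experts layer.
    # Sizes and routing details are placeholders, not DeepSeek's actual design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoE(nn.Module):
        def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts)  # scores every expert for each token
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x):                                 # x: (num_tokens, d_model)
            weights, idx = self.router(x).topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)              # mixing weights for the chosen experts
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
                    if mask.any():
                        out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
            return out  # only top_k of n_experts experts ran for each token

    # Example: every token passes through the layer, but each touches just 2 of the 8 experts.
    tokens = torch.randn(16, 512)
    print(TinyMoE()(tokens).shape)  # torch.Size([16, 512])

Production MoE models add load-balancing losses and far larger expert counts, but the basic trade is the same: total parameter count grows without a proportional increase in per-token compute.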
The result is a model that delivers performance comparable to much more expensive dense models at a fraction of the cost. DeepSeek reports that V3.2 costs approximately one-tenth of what it takes to run GPT-5 on equivalent tasks, a difference that is enormous at scale.
This cost advantage is not just about making AI cheaper for large companies. It is about making frontier AI accessible to organizations that could never afford GPT-5-level API costs. Small startups, academic researchers, non-profit organizations, and developers in emerging markets can now access performance that until recently was available only to the best-funded players.
Competition-Level Performance
The benchmark results are remarkable. DeepSeek V3.2 achieves gold-medal level performance on both the IMO and the IOI, two of the most prestigious international academic competitions for mathematics and computer science, respectively.
On AIME 2025, the American Invitational Mathematics Examination, the model scores 96%. AIME problems are notoriously difficult, requiring creative mathematical reasoning that goes far beyond rote calculation. A 96% score places the model comfortably in the range of the strongest human competitors.
These results reflect a broader trend in AI development: models are increasingly capable of the kind of deep, multi-step reasoning that was once considered uniquely human. It is no longer surprising when an AI model can solve a calculus problem or write a sorting algorithm. What is surprising is when it can solve novel competition problems that require genuine insight and creativity.
V3.2-Speciale: Surpassing GPT-5
The V3.2-Speciale variant takes things a step further. This is a reasoning-enhanced version of the base model, fine-tuned with techniques that improve its ability to think through complex problems step by step. DeepSeek claims that V3.2-Speciale surpasses GPT-5 on reasoning benchmarks, with performance on par with Google's Gemini 3 Pro.
If these claims hold up under independent evaluation, V3.2-Speciale would represent a significant milestone: an open-source model that definitively outperforms the best proprietary offering from OpenAI. Several independent benchmarking organizations are currently running evaluations, and early results appear to support DeepSeek's claims, at least on reasoning-heavy tasks.
It is worth noting that "surpassing GPT-5" does not mean V3.2-Speciale is better at everything. GPT-5 retains advantages in certain areas, particularly in conversational fluency, instruction following, and some creative tasks. But on the technical benchmarks that matter most for coding, mathematics, and scientific reasoning, DeepSeek appears to have closed or eliminated the gap.
The MIT License
DeepSeek's choice of the MIT license is notable. MIT is one of the most permissive software licenses in existence. It allows anyone to use, copy, modify, merge, publish, distribute, sublicense, and sell copies of the software with essentially no restrictions beyond including the original copyright notice.
For the AI industry, this means that anyone can take DeepSeek V3.2, fine-tune it for their specific needs, and deploy it commercially without paying royalties or negotiating license terms. It is the most developer-friendly licensing approach possible, and it removes virtually all barriers to adoption.
The choice also reflects DeepSeek's strategic positioning. For a Chinese AI company competing in a global market, open source is both a philosophical choice and a business strategy. By making its best model freely available, DeepSeek builds a global community of users and developers who build on its technology, provide feedback, and contribute improvements.
What This Means for the Industry
DeepSeek V3.2 is perhaps the strongest evidence yet that the era of proprietary AI dominance is ending. When an open-source model can match the world's most expensive proprietary model at one-tenth the cost, the value proposition of closed AI becomes increasingly difficult to justify.
This does not mean that OpenAI, Anthropic, and Google are in trouble. These companies offer polished APIs, enterprise support, safety infrastructure, and ecosystem integrations that open-source models cannot easily replicate. But the moat is narrowing, and the pace of open-source improvement shows no signs of slowing down.
For developers and businesses, the practical takeaway is simple: you now have more choices than ever, and those choices are better than ever. Whether you run DeepSeek V3.2 on your own hardware, access it through an inference provider, or use it as a starting point for fine-tuning, you are getting frontier-level AI at a fraction of what it cost just a year ago.
Getting Started
DeepSeek V3.2 and V3.2-Speciale are available for download from Hugging Face and DeepSeek's own model repository. The company provides detailed documentation on deployment, fine-tuning, and optimization, along with pre-quantized versions for different hardware configurations. API access is also available through DeepSeek's inference platform at highly competitive pricing.
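For teams that prefer hosted inference, integration is likely to look something like the sketch below, which uses the OpenAI Python client against an OpenAI-compatible endpoint. The base URL, model identifier, and environment variable are assumptions for illustration; check DeepSeek's documentation for the exact values.

    # Hypothetical call to DeepSeek's hosted API via an OpenAI-compatible client.
    # The base_url, model name, and env var below are placeholders; see DeepSeek's docs.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
        base_url="https://api.deepseek.com",     # placeholder endpoint
    )

    response = client.chat.completions.create(
        model="deepseek-v3.2",                   # placeholder model identifier
        messages=[
            {"role": "system", "content": "You are a concise math tutor."},
            {"role": "user", "content": "Show that the sum of the first n odd numbers is n^2."},
        ],
    )
    print(response.choices[0].message.content)

Self-hosting follows the usual open-weights path: download the checkpoints, pick a quantization that fits your hardware, and serve them with your inference stack of choice.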