MiroThinker 1.5: The 30B Open-Weight Model Outperforming Trillion-Parameter Giants
MiroMind's new MiroThinker 1.5 delivers trillion-parameter reasoning performance with just 30B parameters. Explore its Scientist Mode, $0.07 inference cost, and open-weight MIT license.
The era of "bigger is better" is facing a serious challenge. MiroMind has just dropped MiroThinker 1.5, a reasoning model with just 30 billion parameters that punches well above its weight class, rivaling trillion-parameter competitors like Kimi K2 at a fraction of the cost.
Enterprises have long struggled with a dilemma: pay for expensive frontier model APIs or settle for mediocre local performance. MiroThinker 1.5 offers a third path. As an open-weight model architected for extended tool use and multi-step reasoning, it's a game-changer for the push toward deployable AI agents.
MiroThinker 1.5 Performance: Crushing the Hallucination Problem
The secret sauce is what MiroMind calls "Scientist Mode." Most LLMs hallucinate because they rely on memorized patterns. In contrast, MiroThinker 1.5 executes a verifiable research loop: it proposes hypotheses, queries external sources, identifies mismatches, and revises its conclusions.
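The propose–query–verify–revise cycle described above can be sketched as a simple loop. This is a toy illustration with a stubbed evidence source, not MiroMind's actual implementation; in the real model, the lookup step would be live tool calls such as web search.

```python
# Minimal sketch of a "Scientist Mode"-style research loop.
# The lookup function stands in for external tool calls (search, retrieval).

def research_loop(question, hypotheses, lookup, max_rounds=5):
    """Propose a hypothesis, check it against external evidence,
    and revise until the evidence agrees (or rounds run out)."""
    candidates = list(hypotheses)
    for _ in range(max_rounds):
        if not candidates:
            break
        hypothesis = candidates.pop(0)   # propose a hypothesis
        evidence = lookup(question)      # query an external source
        if hypothesis == evidence:       # verify: no mismatch found
            return hypothesis
        # mismatch identified: discard and revise with the next candidate
    return None

# Toy evidence source standing in for web search.
facts = {"capital of Australia": "Canberra"}
answer = research_loop(
    "capital of Australia",
    ["Sydney", "Canberra"],  # the first guess is a common misconception
    lambda q: facts.get(q),
)
print(answer)  # → Canberra
```

The key contrast with a pattern-matching LLM is that the answer only survives if it matches retrieved evidence; a memorized-but-wrong first guess ("Sydney") is caught and revised rather than returned.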
| Metric | MiroThinker 1.5 (30B) | Kimi K2 (1T+) |
|---|---|---|
| BrowseComp-ZH Score | 69.8 | ~68.0 |
| Inference Cost/Call | $0.07 | ~$1.40 |
| Tool Calls/Session | Up to 400 | Varies |
On the BrowseComp-ZH benchmark, the 30B model actually outperformed its trillion-parameter rivals with a score of 69.8. Even more impressive is the price tag: inference costs as low as $0.07 per call, which is roughly 1/20th the cost of its peers.
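The cost gap compounds quickly at scale. A back-of-the-envelope calculation using the per-call prices from the table above (the Kimi K2 figure is an approximation, and the monthly call volume is a hypothetical workload, not a figure from MiroMind):

```python
# Rough savings estimate from the quoted per-call prices.
mirothinker_cost = 0.07   # USD per inference call
kimi_k2_cost = 1.40       # USD per inference call (approximate)

ratio = kimi_k2_cost / mirothinker_cost
calls_per_month = 100_000  # hypothetical enterprise workload
monthly_savings = calls_per_month * (kimi_k2_cost - mirothinker_cost)

print(f"{ratio:.0f}x cheaper per call")   # → 20x cheaper per call
print(f"${monthly_savings:,.0f} saved per month")  # → $133,000 saved per month
```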
Advanced Specs for Enterprise AI Deployment
MiroMind also introduced a 235B variant using a Mixture-of-Experts (MoE) architecture with only 22B active parameters. This model approaches the performance of Gemini 3 Pro and GPT-5-class systems. Key features include:
- Massive context window of 256k tokens
- Support for up to 400 tool calls per session
- Time-Sensitive Training Sandbox to eliminate hindsight bias
- Available under the permissive MIT License
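The MoE numbers above are worth unpacking: because only a subset of experts fires per token, per-token compute tracks the 22B active parameters rather than the 235B total. A quick sanity check of that fraction (a rough calculation, ignoring attention/embedding overheads that vary by architecture):

```python
# Active-parameter fraction for the 235B MoE variant: per-token compute
# scales with active parameters, not total parameters.
total_params = 235e9   # total weights stored
active_params = 22e9   # weights used per forward pass

active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of weights active per token")  # → 9.4% of weights active per token
```

This is why the variant can approach frontier-class quality while keeping inference costs closer to those of a ~22B dense model.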
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.