MiroThinker 1.5: The 30B Open-Weight Model Outperforming Trillion-Parameter Giants
MiroMind's new MiroThinker 1.5 delivers trillion-parameter reasoning performance with just 30B parameters. Explore its Scientist Mode, $0.07 inference cost, and open-weight MIT license.
The era of "bigger is better" is facing a serious challenge. MiroMind has just dropped MiroThinker 1.5, a reasoning model with just 30 billion parameters that punches well above its weight class, rivaling trillion-parameter competitors like Kimi K2 at a fraction of the cost.
Enterprises have long struggled with a dilemma: pay for expensive frontier model APIs or settle for mediocre local performance. MiroThinker 1.5 offers a third path. As an open-weight model architected for extended tool use and multi-step reasoning, it's a game-changer for the push toward deployable AI agents.
MiroThinker 1.5 Performance: Crushing the Hallucination Problem
The secret sauce is what MiroMind calls "Scientist Mode." Most LLMs hallucinate because they rely on memorized patterns. In contrast, MiroThinker 1.5 executes a verifiable research loop: it proposes hypotheses, queries external sources, identifies mismatches, and revises its conclusions.
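MiroMind has not published the internals of Scientist Mode, but the propose-verify-revise loop described above can be sketched in a few lines. Everything below is illustrative: the function names, the toy fact table, and the stand-in callables are assumptions, not MiroThinker's actual interfaces.

```python
# Hypothetical sketch of a "Scientist Mode"-style loop: propose a
# hypothesis, check it against an external source, revise on mismatch.
# All names here are illustrative; MiroMind's real interfaces are not public.

def research_loop(question, propose, lookup, revise, max_rounds=5):
    """Iterate propose -> verify -> revise until evidence agrees."""
    hypothesis = propose(question)
    for _ in range(max_rounds):
        evidence = lookup(question)          # query an external source
        if hypothesis == evidence:           # no mismatch: verified answer
            return hypothesis, True
        hypothesis = revise(hypothesis, evidence)  # revise the conclusion
    return hypothesis, False                 # budget exhausted, unverified

# Toy demonstration with stand-in callables:
facts = {"capital of Australia": "Canberra"}  # mock external source
answer, verified = research_loop(
    "capital of Australia",
    propose=lambda q: "Sydney",   # a plausible memorized-pattern guess
    lookup=lambda q: facts[q],    # the external check
    revise=lambda h, e: e,        # adopt the sourced answer on mismatch
)
```

The design point is that the answer is grounded in the external lookup rather than in the initial guess, which is why this style of loop reduces hallucination.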
| Metric | MiroThinker 1.5 (30B) | Kimi K2 (1T+) |
|---|---|---|
| BrowseComp-ZH Score | 69.8 | ~68.0 |
| Inference Cost/Call | $0.07 | ~$1.40 |
| Tool Calls/Session | Up to 400 | Varies |
On the BrowseComp-ZH benchmark, the 30B model actually outperformed its trillion-parameter rivals with a score of 69.8. Even more impressive is the price tag: inference costs as low as $0.07 per call, which is roughly 1/20th the cost of its peers.
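A quick back-of-envelope check of those figures, using the per-call costs from the table and a hypothetical monthly workload (the 100,000-call volume is an assumption for illustration only):

```python
# Sanity-check the "roughly 1/20th the cost" claim with the quoted figures.
cost_mirothinker = 0.07   # USD per call (reported)
cost_frontier = 1.40      # USD per call (approximate peer figure)

ratio = cost_frontier / cost_mirothinker   # ~20x cheaper per call

# Hypothetical workload: 100k agent calls per month.
monthly_calls = 100_000
monthly_savings = monthly_calls * (cost_frontier - cost_mirothinker)
```

At that assumed volume, the per-call gap compounds into six-figure annual savings, which is the real enterprise argument here.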
Advanced Specs for Enterprise AI Deployment
MiroMind also introduced a 235B variant using a Mixture-of-Experts (MoE) architecture with only 22B active parameters. This variant approaches the performance of frontier systems such as Gemini 3 Pro and GPT-5-class models. Key features include:
- Massive context window of 256k tokens
- Support for up to 400 tool calls per session
- Time-Sensitive Training Sandbox to eliminate hindsight bias
- Available under the permissive MIT License
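Long agent sessions have to stay within both of the published limits at once: the 256k-token context window and the 400-tool-call cap. A minimal sketch of such guardrails is below; the class and method names are illustrative assumptions, not part of any MiroThinker SDK.

```python
# Hypothetical session-budget guardrails matching the published limits
# (256k-token context, up to 400 tool calls). Names are illustrative,
# not MiroThinker's actual SDK.
from dataclasses import dataclass

MAX_CONTEXT_TOKENS = 256_000
MAX_TOOL_CALLS = 400

@dataclass
class SessionBudget:
    tokens_used: int = 0
    tool_calls: int = 0

    def charge(self, tokens: int, is_tool_call: bool = False) -> bool:
        """Return True and record the step if it fits both budgets."""
        if self.tokens_used + tokens > MAX_CONTEXT_TOKENS:
            return False                       # would overflow the context
        if is_tool_call and self.tool_calls >= MAX_TOOL_CALLS:
            return False                       # tool-call cap reached
        self.tokens_used += tokens
        self.tool_calls += int(is_tool_call)
        return True
```

An orchestrator would call `charge` before each model turn or tool invocation and fall back to summarizing or stopping once it returns `False`.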
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.