
MiroThinker 1.5: The 30B Open-Weight Model Outperforming Trillion-Parameter Giants


MiroMind's new MiroThinker 1.5 delivers trillion-parameter reasoning performance with just 30B parameters. Explore its Scientist Mode, $0.07 inference cost, and open-weight MIT license.

The era of "bigger is better" is facing a serious challenge. MiroMind has just dropped MiroThinker 1.5, a reasoning model with only 30 billion parameters that punches well above its weight class, rivaling trillion-parameter competitors like Kimi K2 at a fraction of the cost.

Enterprises have long struggled with a dilemma: pay for expensive frontier model APIs or settle for mediocre local performance. MiroThinker 1.5 offers a third path. As an open-weight model architected for extended tool use and multi-step reasoning, it's a game-changer for the push toward deployable AI agents.

MiroThinker 1.5 Performance: Crushing the Hallucination Problem

The secret sauce is what MiroMind calls "Scientist Mode." Most LLMs hallucinate because they rely on memorized patterns. In contrast, MiroThinker 1.5 executes a verifiable research loop: it proposes hypotheses, queries external sources, identifies mismatches, and revises its conclusions.
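To make that pattern concrete, here is a minimal Python sketch of a propose-verify-revise loop. It illustrates the general agentic idea described above, not MiroMind's actual implementation; the propose, search, and verify callables are hypothetical stand-ins for the model, a retrieval tool, and a consistency check.

```python
# Minimal sketch of a propose -> verify -> revise research loop.
# All three callables are hypothetical placeholders, not MiroThinker's API.
from typing import Callable

def research_loop(
    question: str,
    propose: Callable[[str, list[str]], str],   # drafts a hypothesis from the question + evidence so far
    search: Callable[[str], str],               # queries an external source (e.g. web search)
    verify: Callable[[str, str], bool],         # checks the hypothesis against retrieved evidence
    max_steps: int = 5,
) -> str:
    evidence: list[str] = []
    hypothesis = propose(question, evidence)
    for _ in range(max_steps):
        retrieved = search(hypothesis)          # ground the claim in an external source
        evidence.append(retrieved)
        if verify(hypothesis, retrieved):       # no mismatch found: accept the answer
            return hypothesis
        hypothesis = propose(question, evidence)  # mismatch: revise and try again
    return hypothesis                           # best effort after max_steps
```

The key point is that an answer is only accepted once it survives a check against freshly retrieved evidence, rather than being emitted straight from memorized patterns.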

Metric              | MiroThinker 1.5 (30B) | Kimi K2 (1T+)
BrowseComp-ZH Score | 69.8                  | ~68.0
Inference Cost/Call | $0.07                 | ~$1.40
Tool Calls/Session  | Up to 400             | Varies

On the BrowseComp-ZH benchmark, the 30B model actually outperformed its trillion-parameter rivals with a score of 69.8. Even more impressive is the price tag: inference costs as low as $0.07 per call, which is roughly 1/20th the cost of its peers.

Advanced Specs for Enterprise AI Deployment

MiroMind also introduced a 235B variant using a Mixture-of-Experts (MoE) architecture with only 22B active parameters. This model approaches the performance of systems like Gemini 3 Pro and GPT-5-class models. Key features include:

  • Massive context window of 256k tokens
  • Support for up to 400 tool calls per session
  • Time-Sensitive Training Sandbox to eliminate hindsight bias
  • Available under the permissive MIT License
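For readers unfamiliar with Mixture-of-Experts, the toy NumPy sketch below shows why a 235B-parameter model can activate only around 22B parameters per token: a router sends each token to a small top-k subset of experts, so most of the weights sit idle on any given forward pass. The sizes here are tiny and purely illustrative; they are not MiroThinker's real configuration.

```python
# Toy MoE router: each token is processed by only top_k of n_experts experts,
# so per-token "active" parameters are far fewer than total parameters.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2                     # illustrative sizes only
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through its top-k experts only."""
    logits = x @ router_w                                # (n_experts,) routing scores
    chosen = np.argsort(logits)[-top_k:]                 # indices of the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                             # softmax over the chosen experts only
    # Only top_k expert matrices touch this token; the other weights stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (16,), computed using 2 of 8 experts
```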
