AI Detects 92% of DeFi Hacks Before They Happen
Specialized AI security system detected vulnerabilities in 92% of real-world exploited DeFi contracts, covering $96.8M in exploit value, while hackers increasingly use AI to automate attacks at just $1.22 per attempt.
What if $96.8 million in DeFi hacks could have been prevented? New research suggests they could have been – if projects had used the right AI.
A specialized AI security system detected vulnerabilities in 92% of real-world exploited DeFi contracts, dramatically outperforming general-purpose tools that caught just 34% of the same flaws.
The Numbers Tell a Story
Cecuro, an AI security firm, analyzed 90 smart contracts actually exploited between October 2024 and early 2026, representing $228 million in verified losses. Its purpose-built system flagged vulnerabilities tied to $96.8 million of that exploit value, while a baseline coding agent built on the same GPT-5.1 model detected issues worth only $7.5 million.
The gap wasn't about AI horsepower – both systems used identical frontier models. The difference was methodology: domain-specific security phases and DeFi-focused heuristics layered on top.
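Cecuro has not published its agent, so the following Python sketch is purely illustrative of what "domain-specific phases layered on a general model" could look like in principle; the phase names, prompts, and the `call_model` helper are hypothetical stand-ins, not the firm's actual design.

```python
# Illustrative sketch only: the real agent is unreleased, so every name below
# is a hypothetical stand-in for the idea of running focused, DeFi-specific
# passes on top of a single general-purpose model.

DEFI_PHASES = [
    ("access_control", "Check for missing or flawed authorization on privileged functions."),
    ("price_oracle", "Look for spot-price or otherwise manipulable oracle dependencies."),
    ("accounting", "Trace token balance and share math for rounding or inflation bugs."),
    ("external_calls", "Flag reentrancy and unchecked external calls that move funds."),
]

def call_model(prompt: str) -> str:
    """Placeholder for a call to a frontier LLM; not a real API."""
    raise NotImplementedError

def review_contract(source_code: str) -> list[dict]:
    """Run each DeFi-specific phase as its own focused pass over the contract."""
    findings = []
    for phase, instruction in DEFI_PHASES:
        prompt = (
            "You are auditing a DeFi smart contract.\n"
            f"Phase: {phase}. {instruction}\n\n{source_code}"
        )
        findings.append({"phase": phase, "report": call_model(prompt)})
    return findings
```

The structural point is that the same underlying model is queried several times, with each pass constrained to one class of DeFi failure mode, which is roughly the kind of layering the researchers credit for the detection gap.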
The Hacker's Advantage
But here's the unsettling part: hackers are getting AI upgrades too. Separate research from Anthropic and OpenAI shows AI agents can now execute end-to-end exploits on vulnerable smart contracts. The cost? Just $1.22 per contract attempt.
That's a game-changer for large-scale scanning attacks. AI exploit capability is reportedly doubling every 1.3 months, and bad actors like North Korean groups are already using AI to automate parts of their hacking operations.
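To see why pocket-change pricing matters at scale, here is a minimal back-of-the-envelope sketch; only the $1.22 cost and the reported 1.3-month doubling figure come from the article, while the 10,000-contract campaign size and 12-month horizon are hypothetical.

```python
# Back-of-the-envelope attack economics under the assumptions stated above.

cost_per_attempt = 1.22        # USD per contract attempt, per the cited research
contracts_scanned = 10_000     # hypothetical mass-scanning campaign
print(f"Scan cost: ${cost_per_attempt * contracts_scanned:,.0f}")  # ~$12,200

# Reported capability doubling every 1.3 months, extrapolated over a year:
doubling_period_months = 1.3
months = 12
growth = 2 ** (months / doubling_period_months)
print(f"~{growth:,.0f}x capability growth over {months} months")   # ~600x
```

A five-figure scan budget against a single seven-figure payout is the asymmetry that makes automated, indiscriminate scanning attractive.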
Audited and Still Exploited
Perhaps most troubling: several contracts in the dataset had undergone professional audits before being exploited. This suggests the current playbook – one-off audits plus general-purpose AI tools – may be fundamentally inadequate against sophisticated, high-value vulnerabilities.
The research team open-sourced their benchmark dataset and evaluation framework on GitHub, but held back their full security agent. The reason? Concern that similar tooling could be weaponized for attacks.
The Arms Race Accelerates
We're witnessing an AI security arms race in real time. While specialized defensive AI caught vulnerabilities in 92% of the exploited contracts studied, offensive AI capabilities are scaling even faster. The cost of an automated exploit attempt has plummeted to pocket change, democratizing sophisticated attacks.
This creates a peculiar dynamic: the same AI advances that could protect DeFi are simultaneously making it easier to attack. The question isn't whether AI will transform blockchain security – it already has.