
AI Models Now Capable of Autonomous Smart Contract Exploits, New Research Warns

Source: Image by Gerd Altmann from Pixabay

Frontier artificial intelligence systems are getting strong enough to autonomously discover and weaponize vulnerabilities in smart contracts, according to new findings from the Anthropic Fellows program and the ML Alignment & Theory Scholars Program (MATS). Researchers evaluated advanced models using SCONE-bench, a dataset of 405 previously exploited contracts, and found that GPT-5, Claude Opus 4.5, and Claude Sonnet 4.5 collectively generated the equivalent of $4.6 million in simulated exploits, even on contracts hacked after their knowledge cutoffs. The results provide a conservative estimate of the financial damage this generation of AI could inflict in real-world conditions.

The study revealed that these models are capable not just of detecting weaknesses but of producing full exploit scripts, sequencing complex transactions, and draining simulated liquidity in patterns that closely mirror real attacks seen across Ethereum and BNB Chain. Researchers also explored whether the models could spot undiscovered vulnerabilities in newer contracts. After scanning 2,849 recently deployed BNB Chain contracts, GPT-5 and Sonnet 4.5 identified two zero-day flaws worth $3,694 in simulated profit. One exploit relied on a missing view modifier that enabled balance inflation, while the other redirected fee withdrawals to arbitrary addresses. In both cases, the AI generated working scripts to convert those flaws into profit.
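To make the missing-view-modifier pattern concrete, here is a minimal Python sketch (not the audited contracts from the study, and not Solidity) of a toy contract where a function intended to be a read-only "view" accidentally mutates state, so an attacker can inflate a balance simply by calling it repeatedly. All names here (`ToyToken`, `balance_of`, the reward amount) are hypothetical and purely illustrative.

```python
class ToyToken:
    """Toy in-memory model of a token contract -- illustrative only."""

    def __init__(self):
        self.balances = {}

    def deposit(self, addr, amount):
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def balance_of(self, addr):
        # BUG: this was meant to be a pure read (a Solidity `view`
        # function), but it also credits a 1-unit "reward" on every
        # call -- the balance-inflation pattern the researchers describe.
        self.balances[addr] = self.balances.get(addr, 0) + 1
        return self.balances[addr]


def exploit(token, attacker, calls):
    # An exploit script needs nothing more than repeated calls to the
    # supposedly read-only function.
    for _ in range(calls):
        token.balance_of(attacker)
    return token.balances[attacker]


token = ToyToken()
print(exploit(token, "0xattacker", 1000))  # prints 1000: balance minted from nothing
```

In a real contract the state write would cost gas but still succeed, which is why static checks for state-mutability annotations catch this class of bug cheaply.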

Although the dollar amounts were small, the implications are significant. The entire operation cost only $3,476, with an average run costing just $1.22—a clear sign that automated blockchain exploitation could scale rapidly as model performance improves and computation becomes cheaper. Researchers warn that this trend is likely to shorten the time between a contract’s deployment and the first attempted attack, particularly in fast-moving DeFi environments where capital is public and instantly extractable.
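Taking the reported figures at face value, the economics check out as a rough back-of-the-envelope calculation: the total spend divided by the average run cost implies roughly one run per scanned contract. The sketch below simply reproduces that arithmetic from the numbers in the article.

```python
# Consistency check on the scanning economics reported in the study.
total_cost = 3476.0        # total reported spend (USD)
avg_run_cost = 1.22        # reported average cost per run (USD)
contracts_scanned = 2849   # recently deployed BNB Chain contracts scanned

implied_runs = total_cost / avg_run_cost
print(round(implied_runs))  # prints 2849 -- about one run per contract
```

At roughly a dollar per contract, the marginal cost of attempting an exploit is already negligible next to the value locked in even mid-sized DeFi pools.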

Importantly, the paper stresses that these capabilities aren’t limited to blockchain systems. The same reasoning abilities that allow AI models to exploit smart contracts could eventually be applied to traditional software, private codebases, and broader crypto-market infrastructure. As automated scanning becomes more accessible, the attack surface expands far beyond decentralized finance.

The authors frame their results as a wake-up call for developers, pointing out that AI systems can already execute tasks that once required highly skilled human hackers. With autonomous exploitation no longer theoretical, the urgent question for the crypto industry is how quickly defensive technologies can evolve to keep pace.

<Copyright ⓒ TokenPost, unauthorized reproduction and redistribution prohibited>
