Emoji-Riddled Code and a $12M Crypto Heist
North Korean hackers used ChatGPT, Cursor, and AI web tools to steal $12M in crypto in 90 days—without knowing how to code. What this means for cybersecurity's future.
The malware was riddled with emojis. 💰🔑✅. Professional coders don't do that. Security researcher Marcus Hutchins knew immediately: a human didn't write this.
Behind those emojis was North Korea.
2,000 Machines. 90 Days. $12 Million.
On Wednesday, cybersecurity firm Expel pulled back the curtain on a North Korean state-sponsored operation it calls HexagonalRodent. In roughly three months, the group infected more than 2,000 computers with credential-stealing malware and made off with an estimated $12 million in cryptocurrency.
The targets weren't random. The group zeroed in on developers working on small-scale crypto launches, NFT projects, and Web3 ventures—people who typically work on personal machines without enterprise security tools watching over them. The attack vector was a fake job offer from a fake tech company (complete with a full AI-generated website), followed by a "coding test" file laced with malware. Once downloaded and run, it siphoned off login credentials—including, in some cases, the keys to crypto wallets.
The hackers were also sloppy enough to leave parts of their own infrastructure unsecured, which let Expel trace the operation back to them and estimate the total haul by analyzing an exposed database tracking victim wallets.
How AI Turned Amateurs Into Operators
Here's where this stops being just another North Korea hacking story.
Hutchins—who became famous in security circles for single-handedly stopping the WannaCry ransomware worm—analyzed the malware samples and found something telling. The code was thoroughly annotated in English. North Korean developers don't typically write comments in English. And then there were the emojis—a documented fingerprint of large language model-generated code, since people typing on a PC keyboard rarely stop to insert them.
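The emoji fingerprint Hutchins describes is crude enough to check mechanically. The sketch below is a hypothetical illustration, not a tool any researcher in this story used: it scans source text for characters in common emoji ranges (the ranges here are an assumption and not exhaustive) and flags the lines containing them.

```python
import re

# Rough emoji ranges -- an assumption covering common pictographs,
# not an exhaustive match for every emoji code point.
EMOJI_RE = re.compile(
    "["
    "\U0001F300-\U0001FAFF"  # symbols, pictographs, extended pictographs
    "\u2600-\u27BF"          # miscellaneous symbols and dingbats
    "]"
)

def emoji_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs for lines containing emoji."""
    return [
        (i, line)
        for i, line in enumerate(source.splitlines(), start=1)
        if EMOJI_RE.search(line)
    ]

# Hypothetical snippet in the style of the malware comments described above.
sample = (
    "def collect_credentials():\n"
    "    # Grab the keys 🔑 and send them home 💰\n"
    "    return read_wallet_files()  # ✅ done\n"
)

for lineno, line in emoji_lines(sample):
    print(f"line {lineno}: {line.strip()}")
```

A hit from a scan like this proves nothing on its own, which is the point of Hutchins' analysis: it was the combination of emojis, fluent English comments, and leaked prompts that made the attribution stick.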
The hackers' own leaked infrastructure confirmed it. Prompts used to write the malware with ChatGPT and Cursor were left exposed. The fake recruitment websites were built using Anima, an AI web design tool. Essentially, AI handled nearly every layer of the operation—from the social engineering front to the malicious payload.
"These operators don't have the skills to write code. They don't have the skills to set up infrastructure," Hutchins told WIRED. "AI is actually enabling them to do things that they otherwise just would not be able to do."
Critically, the malware wasn't sophisticated. Standard endpoint detection and response (EDR) tools—the kind deployed across most corporations and government agencies—would have caught it. But individual developers working on indie crypto projects rarely have EDR. The hackers didn't need to outsmart enterprise security. They just needed to find the gap where it didn't exist. "They found a niche where you actually can get away with completely AI-generated malware," Hutchins said.
A Hermit Kingdom, Scaling Up
HexagonalRodent is a single thread in a much larger web. North Korea's cyber operations span ransomware, espionage, large-scale crypto theft, and an elaborate scheme of placing IT workers at Western tech companies under false identities. Security researchers describe the whole apparatus as a "state-sanctioned crime syndicate"—one that funds nuclear weapons development and helps the regime evade international sanctions.
AI is now woven into nearly every layer of it. Microsoft researchers have spotted suspected North Korean operators using AI to fabricate IDs, research vulnerabilities, and polish English for social engineering. In February 2025, OpenAI said it had banned suspected North Korean accounts using ChatGPT during fraudulent job interviews—generating technical answers on the fly and writing code after successfully infiltrating companies. Anthropic reported in August that it had detected North Korean IT workers who "appear unable to perform basic technical tasks or professional communication without AI assistance."
North Korea has also reportedly stood up Research Center 227, an organization under the military's Reconnaissance General Bureau specifically tasked with developing AI-focused hacking tools.
What's striking isn't just the adoption of AI—it's the structural logic behind it. North Korea has a limited pool of genuinely skilled hackers, given that most citizens have never had meaningful internet access. But it has a large supply of low-skill IT labor it can mobilize and deploy. AI closes that skill gap at scale.
"They have hundreds of people being sent over the border to work in IT operations, and only a few of them really know what they're doing," Hutchins said. "But then they're able to use generative AI to get a leg up." Expel estimates that as many as 31 individual operators were involved in HexagonalRodent alone. AI didn't reduce headcount. It made a larger, less-skilled workforce viable.
What the AI Companies Say
The response from the platforms involved has been measured. OpenAI told WIRED its tools gave the hackers no "novel capabilities," but acknowledged their value lay in "speed and scale." The company did not confirm whether it had banned accounts specifically tied to Expel's findings. Cursor said it had blocked the HexagonalRodent hackers and is "investigating further" in coordination with other model providers. Anima said it was working with Expel to identify and block the actors, with its CEO calling the campaign "misuse of Anima's coding agent by bad actors."
None of these responses are wrong, exactly. But they point to a structural tension that no single company can resolve unilaterally: commercial AI tools are built for accessibility, and accessibility is precisely what makes them useful to low-skill threat actors.