
AI-Powered Malware Threatens Crypto Security
Google’s Threat Intelligence Group has uncovered a disturbing new trend in cybercrime: state-linked hackers are now using large language models to create and modify malware specifically designed to steal cryptocurrency. The latest research reveals at least five distinct malware families leveraging AI models like Gemini and Qwen2.5-Coder during execution, marking a significant evolution in how cybercriminals deploy artificial intelligence in live operations.
North Korean Hackers Exploit AI for Crypto Theft
The report identifies UNC1069, a North Korean threat actor known for cryptocurrency theft campaigns, as actively misusing Gemini AI to enhance its operations. The group has been documented using social-engineering lures themed around computer maintenance to target cryptocurrency holders and exchanges.
UNC1069’s AI-Powered Tactics
According to Google’s findings, the North Korean group’s queries to Gemini included specific instructions for locating wallet application data, generating scripts to access encrypted storage, and composing multilingual phishing content aimed at crypto exchange employees. These activities appear to be part of a broader attempt to build code capable of stealing digital assets efficiently.
Just-in-Time Code Creation
The malware families identified in the report demonstrate a technique Google calls “just-in-time code creation,” where malicious software dynamically generates scripts and obfuscates its own code during runtime. This represents a fundamental shift from traditional malware design, where malicious logic is typically hard-coded into the binary.
Technical Analysis of AI-Enabled Malware
Google’s technical brief details how two specific malware families—PROMPTFLUX and PROMPTSTEAL—integrate AI models directly into their operations. PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API every hour to rewrite its own VBScript code, while PROMPTSTEAL, attributed to Russia’s APT28, uses the Qwen2.5-Coder model via the Hugging Face API to generate Windows commands on demand.
Advanced Evasion Techniques
By outsourcing parts of its functionality to AI models, the malware can continuously make changes to harden itself against detection systems. This dynamic approach allows the malicious code to evolve in real-time, making traditional signature-based detection methods less effective against these new threats.
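Why self-rewriting code defeats hash-based signatures can be shown in a few lines: two functionally identical script variants produce entirely different cryptographic digests, so a signature written against one variant never matches the next. The variant strings below are harmless stand-ins, not real malware samples.

```python
import hashlib

# Two functionally identical script variants, of the kind a
# self-rewriting program might produce between check-ins
# (harmless illustrative strings only).
variant_a = "x = 1\nprint(x)\n"
variant_b = "value = 1  # renamed identifier, added comment\nprint(value)\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A hash signature written for variant_a never matches variant_b,
# even though both programs behave identically when run.
print(sig_a == sig_b)  # False
```

This is why defenders increasingly lean on behavioral detection (what the process does at runtime) rather than static fingerprints of the file itself.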
Google’s Response and Security Measures
Google has taken immediate action against these threats, disabling accounts tied to the malicious activities and implementing new safeguards to limit model abuse. The company has introduced refined prompt filters and tighter monitoring of API access to prevent similar misuse in the future.
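Google has not published the internals of its prompt filters, but the general idea can be sketched as a keyword screen applied before a request reaches the model. Everything here—the pattern list, the function name, the thresholding approach—is a hypothetical assumption, not Google's actual implementation, which almost certainly relies on far richer classifiers.

```python
# Hypothetical sketch of a keyword-based prompt filter. The rule list
# and function name are illustrative assumptions only; production
# systems use learned classifiers, not simple substring matching.
SUSPICIOUS_PATTERNS = [
    "bypass antivirus",
    "keylogger",
    "exfiltrate wallet",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known-abusive pattern."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)

print(flag_prompt("Write a script to exfiltrate wallet data"))   # True
print(flag_prompt("Explain how public-key cryptography works"))  # False
```

The limitation is obvious: attackers rephrase. That is why the report pairs filtering with tighter API-access monitoring rather than relying on either alone.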
Implications for Crypto Security
The emergence of AI-powered malware represents a new attack surface where malicious software can query LLMs at runtime to locate wallet storage, generate bespoke exfiltration scripts, and craft highly credible phishing lures. This development could fundamentally change how security professionals approach threat modeling and attribution in the cryptocurrency space, requiring more sophisticated defense mechanisms to protect digital assets from evolving AI-driven threats.
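One practical defensive angle this suggests: malware that queries an LLM at runtime must phone home to a known API endpoint, which network monitoring can flag. The sketch below checks observed outbound connections against an allowlist of processes expected to contact those hosts. The hostnames are real public endpoints, but the allowlist, function name, and connection records are illustrative assumptions, not a production detection rule.

```python
# Hypothetical egress-monitoring sketch: flag outbound connections to
# known LLM API hosts from processes with no business making them.
# The allowlist and the observed records are illustrative assumptions.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api-inference.huggingface.co",       # Hugging Face inference API
}
ALLOWED_PROCESSES = {"chrome.exe", "code.exe"}

def suspicious_connections(connections):
    """connections: iterable of (process_name, destination_host) pairs."""
    return [
        (proc, host)
        for proc, host in connections
        if host in LLM_API_HOSTS and proc not in ALLOWED_PROCESSES
    ]

observed = [
    ("chrome.exe", "generativelanguage.googleapis.com"),       # expected
    ("svchost_update.exe", "api-inference.huggingface.co"),    # suspect
]
print(suspicious_connections(observed))
```

A rule like this is coarse—browsers and developer tools legitimately reach these hosts—but an unrecognized binary calling an LLM inference API is exactly the runtime behavior PROMPTFLUX-style malware cannot avoid exhibiting.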




