Security researchers have discovered the first known ransomware that uses artificial intelligence (AI) to generate malicious code, making it easier for criminals with limited technical skills to launch cyberattacks.
ESET researchers Anton Cherepanov and Peter Strycek identified the malware, called PromptLock, on August 26, 2025. The software uses OpenAI’s gpt-oss:20b model to generate malicious Lua scripts on the fly.
Instead of shipping with pre-written attack code, PromptLock sends prompts to an AI model that generates fresh attack scripts on each run. Because the resulting code differs from one infection to the next, signature-based detection becomes much harder.
The ransomware targets Windows, Linux, and macOS systems. It can search the local filesystem, exfiltrate data, and encrypt important documents, scrambling files with the SPECK 128-bit encryption algorithm.
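SPECK is a lightweight block cipher published by the NSA in 2013, and its core operation is remarkably small. The following Go sketch implements the raw SPECK128/128 block function for illustration only; it is not PromptLock’s code, and it omits everything a real encryptor would need, such as a mode of operation, key management, and file handling. The test values are the cipher’s published reference vectors.

```go
package main

import "fmt"

const rounds = 32 // SPECK128/128: 64-bit words, 128-bit key, 32 rounds

func ror(x uint64, r uint) uint64 { return x>>r | x<<(64-r) }
func rol(x uint64, r uint) uint64 { return x<<r | x>>(64-r) }

// expandKey derives the 32 round keys from a 128-bit key
// (k0 = low key word, l0 = high key word).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	for i := 0; i < rounds; i++ {
		rk[i] = k0
		l0 = (k0 + ror(l0, 8)) ^ uint64(i)
		k0 = rol(k0, 3) ^ l0
	}
	return rk
}

// encryptBlock runs one 128-bit block (x = high word, y = low word)
// through all rounds of the cipher.
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (ror(x, 8) + y) ^ rk[i]
		y = rol(y, 3) ^ x
	}
	return x, y
}

func main() {
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	// Should print the specification's test vector:
	// a65d985179783265 7860fedf5c570d18
	fmt.Printf("%016x %016x\n", x, y)
}
```

The entire cipher fits in a few dozen lines with no external dependencies, which plausibly explains its appeal for a compact, cross-platform payload.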
Written in the Go programming language, PromptLock connects to the AI model through the Ollama API. Rather than carrying the large model files itself, it communicates with a remote server that runs the model.
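For a sense of how little code that client side requires, here is a minimal Go sketch that sends a prompt to Ollama’s documented /api/generate endpoint and prints the reply. The server address and the deliberately benign prompt are illustrative placeholders, not details taken from PromptLock.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the documented Ollama /api/generate payload.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures only the field we need from Ollama's reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Placeholder address: Ollama's default local port. PromptLock reportedly
	// reaches a remote server running the model instead.
	const server = "http://127.0.0.1:11434/api/generate"

	body, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua script that lists the files in the current directory.",
		Stream: false, // request a single JSON object rather than a stream
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post(server, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // newly generated text, different on each run
}
```

Each call can return different code for the same prompt, which is exactly the property that frustrates signature-based detection.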
For now, PromptLock appears to be a proof of concept rather than active malware. ESET found samples uploaded to VirusTotal from the United States on August 25, 2025, and no real-world attacks have been reported.
“We believe it is our responsibility to inform the cybersecurity community about such developments,” the ESET researchers said.
The malware’s embedded prompts even include instructions for generating a custom ransom note based on whether the infected machine is a personal computer, a company server, or an industrial control system.
This development worries cybersecurity experts because it lowers the technical barriers that previously limited who could create ransomware: criminals no longer need extensive programming knowledge to build sophisticated attacks. Because AI-generated payloads can adapt to different environments and adjust their tactics automatically, they may also prove harder to detect and contain than conventional threats.
Recent research from Anthropic shows that cybercriminals are already using AI chatbots for malicious purposes. The company discovered threat actors using its Claude AI system to develop multiple ransomware variants and to conduct large-scale data theft operations.
“We’ve developed sophisticated safety and security measures to prevent the misuse of our AI models. But cybercriminals and other malicious actors are actively attempting to find ways around them,” Anthropic’s threat intelligence team wrote.
Security experts warn that this represents a new chapter in cybercrime. As AI tools become more powerful and accessible, criminals will likely find new ways to automate and improve their attacks.
“The rise of AI-powered malware represents a new frontier in cybersecurity,” Cherepanov wrote in a LinkedIn post. “By sharing these findings, we hope to spark discussion, preparedness, and further research across the industry.”
The discovery raises important questions about how AI companies can prevent their technology from being misused while still making it available for legitimate purposes.