# OpenAI Confirms That Threat Actors Use ChatGPT to Create Malware

OpenAI has confirmed that its AI-powered tool, ChatGPT, has been exploited by malicious actors to write and debug malware, evade detection, and conduct spear-phishing campaigns. The disclosure comes after OpenAI disrupted more than 20 cyber operations in 2024, and it marks the first official confirmation of how generative AI models like ChatGPT are being co-opted for malicious purposes. Threat actors have found ways to leverage AI to enhance the effectiveness of offensive operations, making it easier for even inexperienced hackers to develop and execute cyberattacks. This article examines the growing threat landscape around AI tools, with detailed accounts of attacks carried out by Chinese, Iranian, and other adversaries.

## 1. OpenAI's Role in Stopping Malicious Cyber Operations

OpenAI's report, released in October 2024, documents the ways ChatGPT has been misused in cyber operations. Over the past year, the company has taken action against more than 20 identified cases in which its AI tool was used to develop malware or assist in cyberattacks. These cases demonstrate that threat actors are evolving and adapting AI technology to their advantage.

Key malicious activities:

- **Developing malware:** Threat actors used ChatGPT to write scripts that form part of multi-stage infection chains.
- **Debugging code:** Hackers employed ChatGPT to identify and resolve coding errors in their malicious programs.
- **Evading detection:** By using AI to obfuscate scripts and code, attackers made it harder for security systems to flag their activity.

## 2. Early Warnings: Initial Reports of AI Use in Cyberattacks

One of the first signs of AI-assisted cyberattacks was reported by Proofpoint in April 2024, which linked the cybercrime group TA547 (also known as "Scully Spider") to an AI-written PowerShell loader used to deliver the Rhadamanthys info-stealer.
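Researchers attributed that loader to an LLM in part because it carried unusually tidy, line-by-line comments, a trait rarely seen in hand-written crimeware. As a purely illustrative triage sketch from the defender's side (the function names and the 0.4 threshold here are hypothetical examples, not taken from Proofpoint's or any vendor's actual tooling), one crude signal is comment density:

```python
def comment_ratio(script: str) -> float:
    """Return the fraction of non-blank lines that are '#' comments.

    Works on PowerShell or shell-style scripts, where comments start with '#'.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith("#"))
    return comments / len(lines)


def looks_machine_annotated(script: str, threshold: float = 0.4) -> bool:
    """Crude triage flag: hand-written loaders rarely comment nearly every line.

    The 0.4 cutoff is a hypothetical example value, not a tested detection rule.
    """
    return comment_ratio(script) >= threshold


# A benign stand-in for the kind of heavily annotated script researchers described.
sample = """\
# Download the payload from the remote server
$data = Invoke-WebRequest -Uri $url
# Decode the Base64-encoded content
$bytes = [Convert]::FromBase64String($data.Content)
# Write the decoded bytes to a temporary file
[IO.File]::WriteAllBytes($tmp, $bytes)
"""

print(comment_ratio(sample))           # 3 comments out of 6 lines -> 0.5
print(looks_machine_annotated(sample)) # True
```

A real classifier would of course combine many weak signals (comment phrasing, formatting regularity, variable-naming style) rather than a single ratio; this sketch only shows why the comment pattern stood out to analysts.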
This attack marked one of the earliest confirmations that cybercriminals were adopting AI tools to streamline and refine their operations. Shortly after, in September, HP Wolf Security researchers found evidence that cybercriminals targeting French users were using AI tools to write scripts as part of complex malware campaigns. AI involvement in cyberattacks has since become more prevalent, raising alarm across the cybersecurity community.

## 3. Confirmed Exploitation of ChatGPT by Chinese and Iranian Threat Actors

OpenAI's report identified two prominent threat groups, SweetSpecter (China) and CyberAv3ngers (Iran), that actively used ChatGPT to facilitate their operations. These actors weaponized AI for reconnaissance, malware development, and social engineering.

### SweetSpecter (China)

First reported by Cisco Talos in 2023, SweetSpecter is a Chinese cyber-espionage group targeting governments across Asia. OpenAI discovered that SweetSpecter used ChatGPT accounts to:

- Conduct reconnaissance on vulnerabilities in popular applications and content management systems.
- Request guidance on exploiting the Log4Shell vulnerability in Log4j.
- Develop spear-phishing campaigns by crafting deceptive job-recruitment messages and ZIP file attachments that triggered infection chains.

The group also targeted OpenAI directly, sending phishing emails disguised as support requests to the company's employees. If the malicious attachment was opened, it deployed the SugarGh0st RAT onto the victim's system.

### CyberAv3ngers (Iran)

CyberAv3ngers, affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC), is known for targeting industrial systems in critical-infrastructure sectors across Western countries. The group exploited ChatGPT for:

- **Vulnerability research:** They sought information on vulnerabilities in industrial control systems such as programmable logic controllers (PLCs).
- **Malware development:** They used AI tools to create custom Bash and Python scripts, obfuscate code, and debug malware targeting macOS systems.
- **Post-compromise activity:** The group asked how to steal user passwords and exfiltrate data from compromised systems.

## 4. Techniques and Tools Used by Cybercriminals with ChatGPT

Examples from OpenAI's report reveal how ChatGPT has been leveraged to enhance offensive cyber operations. The techniques fall into several categories:

- **LLM-informed reconnaissance:** Threat actors asked the AI for information about vulnerabilities, applications, industrial devices, and default passwords for widely used hardware.
- **LLM-enhanced scripting:** AI helped attackers write, debug, and obfuscate malware scripts, making them easier to deploy.
- **LLM-aided social engineering:** ChatGPT was employed to craft realistic phishing emails, job offers, and other communications designed to deceive targets.

The following table highlights specific actions attackers requested from ChatGPT:

| Activity | Category |
| --- | --- |
| Research vulnerabilities in Log4j versions | LLM-informed reconnaissance |
| Exploit car manufacturer infrastructure | LLM-assisted vulnerability research |
| Obfuscate malicious VBA scripts | LLM-enhanced anomaly detection evasion |
| Debug Android malware | LLM-aided development |
| Develop C&C infrastructure for malware | LLM-aided development |

## 5. Case Study: Storm-0817 and AI-Assisted Malware Development

The third major threat group highlighted in OpenAI's report is Storm-0817, also based in Iran. This group used ChatGPT extensively to debug and develop malware. Key activities included:

- **Debugging malware:** The group requested help debugging its Android malware.
- **Developing command-and-control (C&C) infrastructure:** Storm-0817 used AI to build server-side code supporting the malware's C&C connections.
- **Translation and reconnaissance:** ChatGPT was also used to translate LinkedIn profiles of cybersecurity professionals and to scrape Instagram for data.

The malware developed by this group was highly sophisticated, capable of stealing contact lists, browsing history, and precise geolocation data from infected devices. Its C&C infrastructure was hosted on a WAMP server using a compromised domain.

## 6. Implications of AI-Powered Cyberattacks

OpenAI's confirmation of these attacks highlights the significant risks AI tools pose in the cybersecurity landscape. While generative AI has the potential to enhance productivity and innovation, it also lowers the barrier to entry for cybercriminals, allowing even low-skilled actors to mount advanced attacks.

Key implications:

- **Increased efficiency for attackers:** AI makes developing and executing malware faster and more efficient.
- **Diverse application:** AI tools can be used at every stage of an attack, from reconnaissance and phishing to malware development and post-compromise activity.
- **Challenge for defenders:** Traditional cybersecurity tools may struggle to detect AI-generated code or scripts, requiring new approaches and enhanced detection methods.

## 7. Conclusion

OpenAI's recent report underscores the growing threat of AI being co-opted for malicious purposes. While AI holds tremendous promise for innovation, the ability of cybercriminals to use these tools to scale and enhance attacks is concerning. Cybersecurity measures must evolve quickly to meet these challenges and ensure that AI-driven advances do not become a double-edged sword.

## FAQs

**1. How can ChatGPT be used to write malware?**
ChatGPT can assist threat actors by writing, debugging, and obfuscating code, making it easier to create malware with less technical expertise.

**2. Is ChatGPT able to perform advanced cyberattacks?**
ChatGPT does not perform attacks itself, but it can assist malicious actors by providing information and code for vulnerability exploitation, social engineering, and malware creation.

**3. What has OpenAI done to mitigate this threat?**
OpenAI has banned all accounts associated with malicious activity, shared threat indicators with partners, and is working to improve safeguards against misuse of its tools.

**4. Can AI help in defending against cyberattacks?**
Yes. AI can be a valuable cybersecurity tool, identifying patterns, automating threat detection, and speeding response to emerging threats.

**5. What is SweetSpecter, and how did it target OpenAI?**
SweetSpecter is a Chinese cyber-espionage group that used ChatGPT to craft spear-phishing campaigns targeting OpenAI employees, aiming to drop malware on their systems.

**6. How can Technijian help in combating AI-based cyber threats?**
Technijian can help businesses strengthen their cybersecurity posture with advanced AI-driven security solutions, real-time threat monitoring, and expert consultation to defend against AI-enhanced attacks.