OpenAI confirms that threat actors use ChatGPT to create malware.

In a significant revelation, OpenAI has confirmed that its AI-powered tool, ChatGPT, has been exploited by malicious actors to write and debug malware, evade detection, and perform spear-phishing campaigns. This disclosure comes after OpenAI disrupted over 20 cyber operations this year, marking the first official confirmation of how generative AI models like ChatGPT are being co-opted for malicious purposes. Cyber threat actors have found ways to leverage AI’s potential to enhance the effectiveness of offensive operations, making it easier for even less experienced hackers to develop and execute cyberattacks.

This article delves deep into the growing threat landscape involving AI tools, with detailed accounts of attacks carried out by Chinese, Iranian, and other adversaries.


1. OpenAI’s Role in Stopping Malicious Cyber Operations

OpenAI’s report, released in October 2024, details the ways ChatGPT has been misused in cyber operations. Over the past year, the company has disrupted more than 20 operations in which its models were used to develop malware or otherwise support cyberattacks. These cases demonstrate that threat actors are adapting AI technology to their advantage.

Key Malicious Activities:

  • Developing malware: Threat actors used ChatGPT to write scripts that form part of multi-stage infection chains.
  • Debugging code: Hackers employed ChatGPT to help them identify and resolve coding errors in their malicious programs.
  • Evading detection: By using AI to obfuscate scripts and code, attackers made it harder for security systems to flag their activities.

2. Early Warnings: Initial Reports of AI Use in Cyber Attacks

One of the first public signs of AI-assisted cyberattacks came from Proofpoint in April 2024, which linked the cybercrime group TA547 (also known as “Scully Spider”) to a PowerShell loader, suspected to have been written with AI assistance, that delivered the Rhadamanthys info-stealer. The campaign was among the earliest indications that cybercriminals were adopting AI tools to streamline and refine their operations.

Later that year, in September 2024, HP Wolf Security researchers found evidence that cybercriminals targeting French users were using AI tools to write scripts as part of complex malware campaigns. This pattern of AI involvement in cyberattacks has become more prevalent, raising alarm across cybersecurity communities.


3. Confirmed Exploitation of ChatGPT by Chinese and Iranian Threat Actors

OpenAI’s report further identified two prominent threat groups—SweetSpecter (China) and CyberAv3ngers (Iran)—actively utilizing ChatGPT to facilitate their operations. These actors were able to weaponize AI for reconnaissance, malware development, and social engineering purposes.

SweetSpecter (China)

First reported by Cisco Talos in 2023, SweetSpecter is a Chinese cyber-espionage group targeting governments across Asia. OpenAI discovered that SweetSpecter used ChatGPT accounts to:

  • Conduct reconnaissance on vulnerabilities in popular applications and content management systems.
  • Request guidance on exploiting the Log4Shell vulnerability in Log4j.
  • Develop spear-phishing campaigns by crafting deceptive job recruitment messages and ZIP file attachments, which led to infection chains.

The group also targeted OpenAI directly, sending its employees phishing emails disguised as support requests. Opening the malicious attachments triggered an infection chain that deployed the SugarGh0st RAT onto victim systems.
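
From a defensive standpoint, the group’s interest in Log4Shell is a reminder to keep auditing Log4j deployments. The following is a minimal, hypothetical sketch (the filename pattern and the 2.17.1 version threshold are assumptions for illustration) that walks a directory tree, finds log4j-core JAR files, and flags versions that predate the patched releases. It is a starting point for an inventory check, not a complete vulnerability scanner.

```python
# Hypothetical defensive sketch: locate log4j-core JARs and flag old versions.
# Assumes JAR filenames follow the common "log4j-core-<version>.jar" pattern.
import re
import sys
from pathlib import Path

PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
SAFE_VERSION = (2, 17, 1)  # assumption: treat anything older as needing review

def scan(root: str) -> None:
    """Recursively scan a directory and report the version of each log4j-core JAR."""
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = PATTERN.search(jar.name)
        if not match:
            continue
        version = tuple(int(part) for part in match.groups())
        status = "OK" if version >= SAFE_VERSION else "REVIEW (possibly vulnerable)"
        print(f"{jar}: {'.'.join(map(str, version))} -> {status}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Running it against an application server’s library directory lists each JAR with an OK or REVIEW verdict; flagged files still need manual confirmation against vendor advisories.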

CyberAv3ngers (Iran)

CyberAv3ngers, affiliated with Iran’s Islamic Revolutionary Guard Corps (IRGC), is known for targeting industrial systems in critical infrastructure sectors across Western countries. The group exploited ChatGPT for:

  • Vulnerability research: They sought information on weaknesses in industrial control system components such as programmable logic controllers (PLCs); a defensive counterpart is sketched after this list.
  • Malware development: They used AI tools to create custom bash and Python scripts, obfuscate code, and debug malware targeting macOS systems.
  • Post-compromise activity: The group inquired how to use AI to steal user passwords and perform data exfiltration from compromised systems.
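
Given the group’s focus on weaknesses in PLCs and other industrial devices, one low-effort defensive counterpart is auditing your own asset inventory for vendor-default credentials. The sketch below is purely illustrative: the inventory structure and the defaults list are assumptions, and a real environment would pull both from an asset database and a maintained credential feed.

```python
# Hypothetical hardening check: flag devices in an asset inventory that still
# use vendor-default credentials. Inventory format and defaults list are
# illustrative assumptions, not a real product schema.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

# Example inventory entries (lab-style plaintext purely for demonstration).
inventory = [
    {"device": "plc-line-1", "username": "admin", "password": "admin"},
    {"device": "hmi-panel-3", "username": "operator", "password": "S7r0ng!Pass"},
]

def audit(devices):
    """Return the names of devices whose credentials match a known vendor default."""
    return [
        d["device"]
        for d in devices
        if (d["username"], d["password"]) in KNOWN_DEFAULTS
    ]

if __name__ == "__main__":
    for name in audit(inventory):
        print(f"WARNING: {name} appears to use default credentials; rotate them.")
```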

4. Techniques and Tools Used by Cybercriminals with ChatGPT

Several examples from OpenAI’s report reveal how ChatGPT has been leveraged to enhance offensive cyber operations. These techniques fall into several categories:

  • LLM-Informed Reconnaissance: Threat actors asked the AI to provide information about vulnerabilities, applications, industrial devices, and default passwords for widely used hardware.
  • LLM-Enhanced Scripting: AI was used to help attackers write, debug, and obfuscate malware scripts, making them easier to develop and deploy.
  • LLM-Aided Social Engineering: ChatGPT was employed to craft realistic phishing emails, job offers, and other forms of communication designed to deceive targets.

The following table highlights specific actions that attackers requested from ChatGPT:

Activity | Category
Research vulnerabilities in Log4j versions | LLM-informed reconnaissance
Exploit car manufacturer infrastructure | LLM-assisted vulnerability research
Obfuscate malicious VBA scripts | LLM-enhanced anomaly detection evasion
Debug Android malware | LLM-aided development
Develop C&C infrastructure for malware | LLM-aided development

5. Case Study: Storm-0817 and AI-Assisted Malware Development

The third major threat group highlighted in OpenAI’s report is Storm-0817, also based in Iran. This group used ChatGPT extensively for debugging and developing malware. Some of the key activities conducted by Storm-0817 include:

  • Debugging malware: The group requested help with debugging Android malware.
  • Developing Command-and-Control (C&C) infrastructure: Storm-0817 used AI to build server-side code that supported the malware’s C&C connections.
  • Translation and reconnaissance: ChatGPT was also used to translate LinkedIn profiles of cybersecurity professionals and scrape Instagram for data.

The Android malware developed by this group could steal contact lists, browsing history, and precise geolocation from infected devices. Its C&C infrastructure was hosted on a WAMP server using a compromised domain.


6. Implications of AI-Powered Cyber Attacks

OpenAI’s confirmation of these attacks highlights the significant risks posed by AI tools in the cybersecurity landscape. While generative AI has the potential to enhance productivity and innovation, it also lowers the barrier to entry for cybercriminals, allowing even low-skilled actors to engage in advanced attacks.

Key Implications:

  • Increased efficiency for attackers: AI makes the development and execution of malware faster and more efficient.
  • Diverse application: AI tools can be used in every stage of an attack, from reconnaissance and phishing to malware development and post-compromise activities.
  • Challenge for defenders: Traditional cybersecurity tools may struggle to flag AI-generated code or scripts, requiring new approaches and enhanced detection methods, as sketched below.
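
One pragmatic starting point for the defender challenge above is heuristic triage of scripts before deeper analysis. The sketch below is a rough filter, not a detection product: the entropy threshold and marker list are assumptions chosen for illustration. It computes Shannon entropy over a file and looks for PowerShell-style indicators commonly associated with obfuscation, so an analyst or sandbox can prioritize what to inspect.

```python
# Rough triage sketch: flag scripts that look heavily obfuscated so an analyst
# (or sandbox) can look closer. Entropy threshold and keyword list are
# illustrative assumptions, not tuned detection logic.
import math
import sys
from collections import Counter
from pathlib import Path

SUSPICIOUS_MARKERS = ("-EncodedCommand", "FromBase64String", "Invoke-Expression", "IEX(")
ENTROPY_THRESHOLD = 5.0  # assumption: plain scripts usually score well below this

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte over the raw file contents."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def triage(path: str) -> None:
    data = Path(path).read_bytes()
    text = data.decode("utf-8", errors="ignore").lower()
    entropy = shannon_entropy(data)
    markers = [m for m in SUSPICIOUS_MARKERS if m.lower() in text]
    if entropy > ENTROPY_THRESHOLD or markers:
        print(f"{path}: entropy={entropy:.2f}, markers={markers or 'none'} -> needs review")
    else:
        print(f"{path}: entropy={entropy:.2f} -> no obvious obfuscation markers")

if __name__ == "__main__":
    for script in sys.argv[1:]:
        triage(script)
```

High entropy alone is not proof of malice (compressed or encoded data also scores high), which is why the output only marks files for review rather than blocking them.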

7. Conclusion

The recent report from OpenAI underscores the growing threat of AI being co-opted for malicious purposes. While AI holds tremendous promise for innovation, the ability of cybercriminals to use these tools to scale and refine attacks is concerning. Cybersecurity measures must evolve quickly to address these challenges so that AI-driven advancements don’t become a double-edged sword.


FAQs

1. How can ChatGPT be used to write malware?

ChatGPT can assist threat actors by writing, debugging, and obfuscating code, making it easier to create malware with less technical expertise.

2. Is ChatGPT able to perform advanced cyberattacks?

ChatGPT itself doesn’t perform attacks but can assist malicious actors by providing information and code for vulnerabilities, social engineering, and malware creation.

3. What has OpenAI done to mitigate this threat?

OpenAI has banned all accounts associated with malicious activity, shared threat indicators with partners, and is working to improve safeguards to prevent misuse of its tools.
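
For organizations that receive such shared indicators, a minimal way to act on them is to sweep local logs for matches. The sketch below assumes a hypothetical plain-text indicator list (one domain, IP, or hash per line) and newline-delimited log files; a real deployment would feed indicators into a SIEM or threat-intelligence platform instead.

```python
# Minimal IOC sweep sketch: check local log lines against a shared indicator list.
# File names and formats here are hypothetical placeholders.
import sys
from pathlib import Path

def load_indicators(path: str) -> set[str]:
    """One indicator (domain, IP, or hash) per line; blank lines and comments ignored."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return {ln.strip().lower() for ln in lines if ln.strip() and not ln.startswith("#")}

def sweep(log_path: str, indicators: set[str]) -> None:
    """Print every log line that contains one of the indicators."""
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for number, line in enumerate(log, start=1):
            lowered = line.lower()
            hits = [ioc for ioc in indicators if ioc in lowered]
            if hits:
                print(f"{log_path}:{number} matched {hits}")

if __name__ == "__main__":
    # Hypothetical usage: python ioc_sweep.py shared_iocs.txt proxy.log dns.log
    iocs = load_indicators(sys.argv[1])
    for log_file in sys.argv[2:]:
        sweep(log_file, iocs)
```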

4. Can AI help in defending against cyberattacks?

Yes, AI can be a valuable tool for cybersecurity by identifying patterns, automating threat detection, and quickly responding to emerging threats.
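
As one deliberately simplified illustration of that point, unsupervised models can flag unusual activity in security telemetry. The sketch below uses scikit-learn’s IsolationForest on fabricated login-event features; the feature choice, sample data, and contamination rate are assumptions for demonstration only, not a production detection pipeline.

```python
# Toy anomaly-detection sketch with scikit-learn's IsolationForest.
# The feature set (login hour, megabytes transferred, failed attempts) and the
# sample data are fabricated purely to illustrate the workflow.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [login_hour, megabytes_transferred, failed_login_attempts]
normal_events = np.array([
    [9, 12.0, 0], [10, 8.5, 1], [11, 15.2, 0], [14, 9.9, 0], [16, 11.3, 1],
])
new_events = np.array([
    [10, 10.4, 0],    # resembles routine activity
    [3, 480.0, 12],   # off-hours bulk transfer with many failed logins
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_events)

for event, label in zip(new_events, model.predict(new_events)):
    verdict = "anomalous" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {verdict}")
```

In practice, the value comes from feeding the model real, well-chosen features and reviewing its flags with human analysts rather than acting on them automatically.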

5. What is SweetSpecter, and how did it target OpenAI?

SweetSpecter is a Chinese cyber-espionage group that used ChatGPT to craft spear-phishing campaigns targeting OpenAI employees, aiming to drop malware on their systems.

6. How can Technijian help in combating AI-based cyber threats?

Technijian can help businesses strengthen their cybersecurity posture by offering advanced AI-driven security solutions, real-time threat monitoring, and expert consultation to defend against AI-enhanced attacks.

About Technijian

Technijian is a premier provider of managed IT services in Orange County, delivering top-tier IT solutions designed to empower businesses to thrive in today’s fast-paced digital landscape. With a focus on reliability, security, and efficiency, we specialize in offering IT services that are tailored to meet the unique needs of businesses across Irvine, Anaheim, Riverside, San Bernardino, and Orange County.

Located in the heart of Irvine, Technijian has earned a reputation as a trusted managed service provider in Irvine for businesses seeking robust IT support. Our dedicated team of IT experts ensures that your technology infrastructure is always optimized, secure, and aligned with your business goals. Whether you require IT support in Irvine, IT support in Orange County, managed IT services in Irvine, or IT services in Orange County, we’ve got you covered. Our expertise also extends to providing managed IT services in Anaheim, IT support in Riverside, and IT consultant services in San Diego.

As a leader in IT support in Orange County, we understand the challenges businesses face when maintaining and advancing their IT environments. That’s why our comprehensive suite of services includes IT infrastructure management, IT support in Anaheim, IT help desk, and IT outsourcing services. With proactive monitoring, disaster recovery, and strategic consulting, our goal is to minimize downtime, enhance productivity, and provide IT security services that give you peace of mind.

At Technijian, we take pride in offering customized managed IT solutions that exceed client expectations. From small businesses to large enterprises, our IT services in Irvine are designed to scale with your needs and support your growth. We specialize in cloud services, IT systems management, business IT support, technology support services, IT network management, and enterprise IT support. Whether you’re looking for IT support in Riverside, IT solutions in San Diego, or managed services in Orange County, Technijian has the expertise to meet your requirements.

Our managed service providers in Orange County offer comprehensive solutions for every business need. Whether you need help with IT performance optimization, IT service management, or IT security solutions, we provide services that enable businesses to remain agile in today’s competitive market. Our IT support services in Orange County and managed IT services in Irvine ensure your operations remain secure, productive, and future-ready.

We also offer managed service provider services and IT support in Irvine, CA, focusing on delivering efficient and scalable IT services across Southern California. Technijian is committed to providing IT managed services in Irvine, IT support in Anaheim, and IT services in Orange County, CA that adapt to the ever-changing demands of business technology.

Experience the difference with Technijian—your trusted partner for IT consulting services, managed IT services, and IT support in Orange County. Let us guide you through the complexities of modern IT infrastructure and help you achieve your business objectives with confidence.

Ravi Jain, Author

Technijian was founded in November of 2000 by Ravi Jain with the goal of providing technology support for small to midsize companies. As the company grew, it expanded its services to address the growing needs of its loyal client base. From its humble beginnings as a one-man IT shop, Technijian now employs teams of support staff and engineers in domestic and international offices. Technijian’s US-based office provides the primary line of communication for customers, ensuring each customer enjoys the personalized service for which Technijian has become known.
