Hackers Misusing ChatGPT To Write Malware: OpenAI Report

OpenAI, the company behind the popular AI chatbot ChatGPT, revealed that it has disrupted over 20 operations and deceptive networks worldwide since the beginning of 2024, including networks linked to Iranian and Chinese state-sponsored hackers.

In a report published on Wednesday, the generative AI (GenAI) company said these operations involved using ChatGPT to debug malware, write articles for websites, generate content posted by fake personas on social media accounts, and spread election-related disinformation.

According to researchers Ben Nimmo and Michael Flossman, who wrote the report, the threat actors leveraged AI during the intermediate phases of their campaigns.

"Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity – after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed 'finished' products such as social media posts or malware across the internet," the researchers wrote in the report.

However, they emphasized that despite AI’s involvement in cyberattacks and influence campaigns, its actual impact has so far been limited.

They found no evidence of AI contributing to the creation of new, advanced malware or significantly improving disinformation efforts.

The OpenAI report highlights three key threat groups that exploited ChatGPT to facilitate cyberattacks:

SweetSpecter: This China-linked group used ChatGPT for reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and malware development.

The group also sent spear-phishing emails with malicious attachments to the corporate and personal email accounts of some OpenAI employees, but these were blocked before reaching the targeted inboxes.

CyberAv3ngers: This group, linked to Iran's Islamic Revolutionary Guard Corps (IRGC), is known for its disruptive attacks against industrial control systems (ICS) and programmable logic controllers (PLCs) used in water, manufacturing, and energy systems.

The group used ChatGPT to research vulnerabilities, debug code, and seek scripting advice for targeting infrastructure typically associated with Israel, the United States, or Ireland.

STORM-0817: According to OpenAI, this is the first time this Iran-based threat actor has been publicly identified as using AI models.

The group attempted to use ChatGPT to debug malware, create tools such as an Instagram scraper, translate LinkedIn profiles into Persian, and obtain coding and debugging support while implementing Android malware and its supporting command-and-control infrastructure.

Further, the code snippets generated from attacker-supplied prompts with the help of OpenAI's chatbot indicated that the malware could steal contacts, call logs, installed packages, media on external storage, screenshots, the device's IMEI and model, browsing history, latitude/longitude, files on external storage (PDFs, Excel documents), and content downloaded to external storage, including files sent by secure messaging apps such as WhatsApp and IMO.

"In parallel, STORM-0817 used ChatGPT to support the development of server side code necessary to handle connections from compromised devices," reads the OpenAI report.

“This allowed us to see that the command and control server for this malware is a WAMP (Windows, Apache, MySQL & PHP/Perl/Python) setup and during testing was using the domain stickhero[.]pro.”

While these groups tried to exploit ChatGPT, OpenAI emphasized that AI did not provide them with significant new capabilities for developing malware. The hackers gained only incremental advantages that were already achievable with publicly available, non-AI tools.

Although AI’s involvement in cyberattacks is concerning, OpenAI has implemented measures to identify and disrupt these malicious activities.

"As we look to the future, we will continue to work across our intelligence, investigations, security research, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately. We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security," the company concluded.
