ChatGPT Can Create Mutating Malware That Antiviruses Can’t Detect

Researchers at the cybersecurity company CyberArk Labs are warning that OpenAI’s new and popular AI-driven text generator, ChatGPT, can be used to create ‘highly advanced’ polymorphic malware that contains no malicious code at all.

This advanced type of malicious program can easily evade security products and is hard to detect and mitigate, all with very little effort or investment by the attacker.

A polymorphic virus, sometimes referred to as a metamorphic virus, is a type of malware that uses a polymorphic engine to mutate while keeping the original algorithm intact.

This means the code changes itself every time it runs, but the function of the code (its semantics) does not change at all, making it difficult to detect with many traditional cybersecurity tools, such as antivirus or antimalware solutions.
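To make that concrete, here is a harmless, self-contained sketch (ours, not the researchers’ code) of why signature-based scanning struggles with this: two snippets that behave identically but differ byte-for-byte, and therefore produce completely different hashes.

```python
import hashlib

# Two code variants with identical behavior but different bytes --
# a toy stand-in for what a polymorphic engine produces at scale.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def add(x, y):\n    total = x + y\n    return total\n"

for src in (variant_a, variant_b):
    # A signature scanner keyed to one variant's hash misses the other.
    print(hashlib.sha256(src.encode()).hexdigest()[:16])
    ns = {}
    exec(src, ns)
    print(ns["add"](2, 3))  # both variants print 5: same semantics
```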

CyberArk researchers Eran Shimony and Omer Tsarfati created a proof-of-concept (PoC) to show how it is possible to bypass ChatGPT’s built-in content filters, which are designed to restrict access to certain content types and protect users from potentially harmful or inappropriate material.

For those unaware, ChatGPT (Generative Pre-trained Transformer) is an AI-powered chatbot developed by the AI research company OpenAI that uses natural language processing (NLP) to generate human-like text in response to prompts. The chatbot can be used for a variety of NLP tasks, such as language translation, text summarization, and question answering.

To demonstrate how the malware could be created, the researchers started by asking ChatGPT to write code “injecting [sic] a shellcode into ‘explorer.exe’ in Python.” The content filter was triggered, and ChatGPT refused to carry out the request.

The chatbot responded by saying that it is not appropriate or safe to write code that injects shellcode into a running process, as it could cause harm to the system and potentially compromise security.

The researchers then bypassed the built-in content filters by repeating and rephrasing their requests and insisting that ChatGPT comply. They asked the chatbot to perform the task under multiple constraints and ordered it to obey, after which they received functional code.

They further noted that, unlike the web version, ChatGPT did not apply its content filter when accessed through the API, a discrepancy the researchers could not explain. This made their task much easier, as the web version struggled to process their more complex requests.
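For readers unfamiliar with the API route, a minimal sketch of such a query with the openai Python package of that era (pre-1.0) might look like the following; the model name, key, and deliberately benign prompt are our placeholders, not the researchers’ actual requests.

```python
import openai  # pip install openai (the pre-1.0 interface used here)

openai.api_key = "YOUR_API_KEY"  # placeholder -- set your own key

# The completion endpoint the researchers say skipped the web UI's
# content filter. The prompt here is deliberately harmless.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a Python function that reverses a string.",
    max_tokens=200,
    temperature=1.0,  # nonzero temperature varies the output per call
)
print(response["choices"][0]["text"])
```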

Shimony and Tsarfati used ChatGPT to repeatedly mutate the original code, successfully creating multiple variations of the same threat.

“In other words, we can mutate the output on a whim, making it unique every time. Moreover, adding constraints like changing the use of a specific API call makes security products’ lives more difficult,” they wrote in their technical blog post, which was itself apparently written by AI.

“One of the powerful capabilities of ChatGPT is the ability to easily create and continually mutate injectors. By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect.”
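A rough, benign approximation of that query loop is sketched below; the task requested is harmless, and the helper name is our own invention.

```python
import hashlib
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def fetch_variant(task: str) -> str:
    # Hypothetical helper: each call at nonzero temperature tends to
    # return differently worded code for the same task.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a Python function that {task}.",
        max_tokens=200,
        temperature=1.0,
    )
    return resp["choices"][0]["text"]

# Three requests for the same task typically yield three byte-distinct
# answers -- the "unique piece of code each time" the researchers describe.
hashes = {
    hashlib.sha256(fetch_variant("sorts a list of numbers").encode()).hexdigest()
    for _ in range(3)
}
print(f"{len(hashes)} distinct variants by hash")
```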

The researchers suggest that by requesting specific functionality from ChatGPT, such as code injection, file encryption, or persistence, one can easily obtain new code or modify existing code.

“This results in polymorphic malware that does not exhibit malicious behavior while stored on disk and often does not contain suspicious logic while in memory,” they wrote.
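The mechanism behind that claim is ordinary runtime code loading. As a harmless illustration (ours, with a hardcoded string standing in for text fetched from the chatbot at runtime), the file on disk holds only a stub, and the actual logic is compiled in memory:

```python
# Benign demo: nothing resembling the eventual logic sits on disk.
# The string below stands in for code text received at runtime
# (e.g., from an API); it is compiled and executed in memory only.
received_text = """
def greet(name):
    return f"Hello, {name}!"
"""

namespace = {}
exec(compile(received_text, "<runtime>", "exec"), namespace)
print(namespace["greet"]("world"))  # prints: Hello, world!
```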

While the concept of creating polymorphic malware using ChatGPT may seem daunting, the researchers claim that various persistence techniques, Anti-VM modules, and other malicious payloads can be generated using ChatGPT’s capabilities, allowing attackers to develop a vast range of malware.

“This high level of modularity and adaptability makes it highly evasive to security products that rely on signature-based detection and will be able to bypass measures such as Anti-Malware Scanning Interface (AMSI),” they added.

The researchers did not disclose the details of communication with the command and control (C2) server but added that there are several ways this can be done discreetly without raising suspicion.

Shimony and Tsarfati plan to expand and elaborate on this research in the future and aim to release some of the malware source code for learning purposes.

“As we have seen, the use of ChatGPT’s API within malware can present significant challenges for security professionals. It’s important to remember, this is not just a hypothetical scenario but a very real concern,” the researchers concluded.
