Microsoft and OpenAI Say Hackers Are Using ChatGPT for Cyberattacks

Microsoft and OpenAI have warned that nation-state hackers are weaponizing artificial intelligence (AI) and large language models (LLMs) to enhance their ongoing cyberattacks.

According to research conducted by Microsoft Threat Intelligence in collaboration with OpenAI, the two companies have identified and disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities.

These state-affiliated actors are associated with Russia, North Korea, Iran, and China.

The five state-affiliated malicious actors included two China-affiliated threat actors known as Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM); the Iran-affiliated threat actor known as Crimson Sandstorm (CURIUM); the North Korea-affiliated actor known as Emerald Sleet (THALLIUM); and the Russia-affiliated actor known as Forest Blizzard (STRONTIUM).

For instance, OpenAI reported that China’s Charcoal Typhoon used its services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.

Another example is Iran’s Crimson Sandstorm, which used LLMs to generate code snippets related to app and web development, generate content likely for spear-phishing campaigns, and seek assistance in developing code to evade detection.

In addition, Forest Blizzard, the Russian nation-state group, is said to have used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

OpenAI said Wednesday that it had terminated the identified OpenAI accounts associated with the state-sponsored threat actors. These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks, the AI firm said.

“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships. Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely,” reads the new AI security report released by Microsoft on Wednesday in partnership with OpenAI.

Thankfully, no significant or novel attacks making use of LLM technology have been detected yet, according to the company. “Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI,” Microsoft noted in its report.

To respond to the threat, Microsoft has announced a set of principles shaping its policy and actions to combat abuse of its AI services by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates.

“These principles include identification and action against malicious threat actors’ use, notification to other AI service providers, collaboration with other stakeholders, and transparency,” the Redmond giant said.

Kavita Iyer

