Hacker Stole OpenAI’s Internal AI Details in 2023 Breach: Report

A hacker gained access to ChatGPT creator OpenAI’s internal messaging systems early last year and stole details about the design of the company’s artificial intelligence (AI) products, according to a report published by The New York Times on Thursday.

The hacker reportedly extracted conversations from an internal online forum where OpenAI employees discussed the company’s latest technology advances, the NYT reported, citing two people familiar with the matter.

However, the hacker was unable to access the systems where OpenAI houses and builds its AI products.

The company shared information about the security breach with its employees and board during an all-hands meeting in April 2023.

The report added that OpenAI did not disclose the breach publicly, as no information related to its customers or partners was stolen in the alleged hack.

Further, OpenAI executives reportedly did not view the incident as a national security threat, believing the hacker to be a private individual with no known ties to any foreign government.

The San Francisco-based company also did not notify the F.B.I. or anyone else in law enforcement about the intrusion.

An OpenAI spokesperson acknowledged a “security incident” in a statement but did not provide any further information about the breach.

“As we shared with our board and employees last year, we identified and fixed the underlying security issue and continue to invest in strengthening our security,” the company spokesperson said.

In May this year, OpenAI said that it had blocked five covert influence operations that sought to misuse its AI models for “deceptive activity” across the internet, highlighting growing concerns about the potential misuse of advanced AI technologies.

Given the growing importance of AI technology, attacks on AI firms are likely to continue and increase, warned Dr Ilia Kolochenko, a cybersecurity expert and chief executive at security firm ImmuniWeb.

“While the details of the alleged incident are not yet confirmed by OpenAI, there is a strong possibility that the incident actually took place and is not the only one.

“The global AI race has become a matter of national security for many countries; therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented start-ups to tech giants like Google or OpenAI,” he said.

Kolochenko added that hackers mostly channel their efforts into stealing valuable intellectual property, including technological research and know-how, large language models (LLMs), sources of training data, and commercial information such as AI vendors’ clients and novel uses of AI across different industries.

“More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently,” he noted.

“All corporate users of GenAI vendors should be particularly careful and prudent when they share, or give access to, their proprietary data for LLM training or fine-tuning, as their data – spanning from attorney-client privileged information and trade secrets of the leading industrial or pharmaceutical companies to classified military information – is also in the crosshairs of AI-hungry cybercriminals that are poised to intensify their attacks.”

As we move towards an AI-driven world, such reports are a wake-up call about the need for robust cybersecurity measures and transparency, especially for organizations handling customer and partner data.

Kavita Iyer