Samsung Employees Accidentally Leak Sensitive Company Data To ChatGPT

Just three weeks earlier, Samsung had lifted a ban on employees using OpenAI’s ChatGPT that had originally been imposed over data privacy concerns.

However, in the 20 days since the ban was lifted, there have been three separate instances of Samsung employees accidentally leaking sensitive information related to the company’s semiconductor division to ChatGPT, according to a report from The Economist Korea (as spotted by Mashable).

The ban was originally intended to protect company data but was lifted on March 11, 2023, so that engineers at the semiconductor division could use ChatGPT to fix problems in their source code, improve productivity, and stay aware of the latest technological developments.

While permitting the use of ChatGPT, the South Korean giant also issued a notice to the semiconductor division to “be careful about internal information security and do not enter private information” in the popular artificial intelligence (AI) chatbot.

In the first instance, an employee from the Semiconductor and Device Solutions division encountered an error while running the download software for a semiconductor equipment measurement database (DB).

Since it appeared to be a problem with the software’s source code, the employee copied and pasted the buggy, confidential company source code into the AI tool and asked the chatbot to check for errors and find a solution, potentially making the source code part of ChatGPT’s training data.

In the second instance, an employee who was having difficulty understanding device yield and other data shared the related code with the AI chatbot and requested “code optimization” from ChatGPT.

In the third instance, an employee shared a recording of a confidential company meeting and asked ChatGPT to convert it into notes for a presentation. The information shared with ChatGPT in these incidents now resides on OpenAI’s servers and cannot be retrieved or deleted by Samsung.

As a result, Samsung took immediate measures and has limited the length of each employee’s ChatGPT prompt to one kilobyte, or 1,024 characters of text. The company is also said to be investigating the three employees involved in the leaks and is considering reinstating the ban.
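To put the new limit in concrete terms, here is a minimal Python sketch of how a company-side check could enforce a 1,024-character prompt cap before text is sent to an external chatbot. The function name and the truncate-or-reject behavior are illustrative assumptions, not details of Samsung’s actual tooling.

```python
# Hypothetical sketch: cap prompts at 1,024 characters (roughly 1 KB of text)
# before they are forwarded to an external chatbot. Not Samsung's actual code.

MAX_PROMPT_CHARS = 1024  # the 1 KB limit described in Samsung's reported policy


def enforce_prompt_limit(prompt: str, truncate: bool = False) -> str:
    """Return the prompt if it fits the cap; otherwise truncate or reject it."""
    if len(prompt) <= MAX_PROMPT_CHARS:
        return prompt
    if truncate:
        return prompt[:MAX_PROMPT_CHARS]
    raise ValueError(
        f"Prompt is {len(prompt)} characters; the limit is {MAX_PROMPT_CHARS}."
    )


if __name__ == "__main__":
    print(enforce_prompt_limit("Explain this build error: ..."))   # within the cap
    print(len(enforce_prompt_limit("x" * 5000, truncate=True)))    # prints 1024
```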

In order to prevent such embarrassing mishaps in the future, Samsung is now developing its own AI chatbot, which is meant to be strictly used by its employees for internal purposes.

“If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network,” reads an internal memo sent by Samsung advising its employees to exercise caution while using the AI tool.

“As soon as content is entered into ChatGPT, data is transmitted to and stored on an external server, making it impossible for the company to retrieve it.”

ChatGPT’s user guide explicitly warns users not to share sensitive information in their conversations, as it is “not able to delete specific prompts from your history.”

It is important to note that all questions and text shared with ChatGPT are retained in order to train and improve its AI models unless users explicitly choose to opt out. In other words, it is difficult or impossible to erase the data completely from the system.

“As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements…Your conversations may be reviewed by our AI trainers to improve our systems,” OpenAI clearly explains on its website.

Notably, OpenAI claims, “We remove any personally identifiable information from data we intend to use to improve model performance. We also only use a small sampling of data per customer for our efforts to improve model performance.”

Interestingly, ChatGPT has been under a lot of scrutiny of late over its data collection policies in Europe, with Italy temporarily banning the AI chatbot over privacy concerns. Germany could follow Italy’s lead and take similar action by blocking the chatbot as well.


