In a rather ironic turn, Alphabet Inc., the parent company of Google and one of the biggest backers of AI, has warned its employees not to share confidential information with AI chatbots, including its own chatbot, ‘Bard’, even as it promotes that very chatbot around the world.

Alphabet has instructed its employees not to enter any business information into AI chatbots, citing its long-standing policy on information security to protect sensitive data, according to a report from Reuters, citing four people familiar with the matter.

The chatbots, such as Bard and ChatGPT, are designed to hold realistic conversations with users while making use of generative artificial intelligence to address various user queries.

The report added that human reviewers may access and review the messages sent between users and the AI chatbot, which could create a security risk by allowing sensitive information to fall into the wrong hands. Researchers have also found that similar AI models can reproduce the data they absorbed during training.

The company’s Bard FAQ page tells users, “When you interact with Bard, Google collects your conversations, your location, your feedback, and usage information. That data helps us provide, improve and develop Google products, services, and machine-learning technologies.”

Further, Alphabet has warned its engineers to avoid directly using computer code generated by AI tools such as Bard and ChatGPT. “Don’t include confidential or sensitive information in your Bard conversations,” Google noted in its updated privacy notice.

When Reuters asked about the matter, Alphabet said Bard can make undesired code suggestions but nonetheless helps programmers. The company also said it prefers to stay transparent about its products’ limitations.

Bard AI: Google’s Answer To ‘ChatGPT’

Bard is Google’s experimental, conversational AI chat service, which was revealed by Google and Alphabet CEO Sundar Pichai on February 6, 2023. The AI chatbot is meant to function similarly to its rival ChatGPT, with the biggest difference being their respective data sources.

Based on the LaMDA language model, Bard is trained on “Infiniset”, a dataset of Internet content chosen to enhance its dialogue. It also draws on information from the web in real time to provide fresh, high-quality responses.

Ever since its initial demo in February, Google’s Bard has been plagued with problems, from generating a wrong answer about discoveries from the James Webb Space Telescope to the postponement of this week’s planned EU launch pending further information on its privacy impact.

Earlier this month, the company updated its Bard privacy notice page on its Google Support website to include the following information, marked in bold: “Please do not include information that can be used to identify you or others in your Bard conversations.”

Google is currently rolling out Bard AI in more than 180 countries and 40 languages.

With AI-driven chatbots becoming increasingly popular, Google is not the only company cautioning its own employees about potential privacy issues when using AI chatbots.

Other companies, including Apple, Samsung, and Amazon, have implemented similar safeguards, advising employees to avoid using AI chatbots at work.
