Google has quietly updated its privacy policy to state that it can use publicly available information to train its AI models and enhance their capabilities.
"Google uses information to improve our services and to develop new products, features, and technologies that benefit our users and the public. For example, we use publicly available information to help train Google's AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities," states the new policy, updated on July 1, 2023.
Before the update, the wording of this section stated that the company collected public data for business purposes, research and development, and to improve its Google Translate service.
The revised privacy policy, effective July 1, 2023, expands this to cover AI models, as well as the use of publicly available information to train Bard and Cloud AI products.
"For example, we may collect information that's publicly available online or from other public sources to help train Google's AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities. Or, if your business's information appears on a website, we may index and display it on Google services," it adds.
Christa Muldoon, a Google spokesperson, clarified in a statement to The Verge: "Our privacy policy has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate."
She added, "This latest update simply clarifies that newer services like Bard are also included. We incorporate privacy principles and safeguards into the development of our AI technologies, in line with our AI Principles."
With the new policy, Google makes explicit that it plans to scrape the internet for any content that can be used to train and improve its current and future AI models.
As AI becomes more deeply integrated into everyday services, it will take a collaborative effort from both technology companies and individuals to strike a balance between AI advancement and privacy safeguards, ensuring a responsible future for AI.