Why Google wants its own staff to avoid its own AI chatbot

by Adeel Younas
Why Google Is Stopping Its Employees from Using Bard

Google has reportedly warned employees about the potential dangers of AI-powered chatbots, including the company’s own creation, Bard. The warnings are intended to prevent confidential information from being entered into chatbots, as researchers have found that human reviewers can access such data and that a chatbot may inadvertently disclose it.

Confidentiality Concerns and Information Leakage

According to anonymous sources cited in a Reuters report, Google is taking proactive steps to educate employees about the risks associated with AI chatbots. The warnings echo those issued by other companies, which have likewise advised employees not to share confidential information with chatbots.

The concern stems from researchers’ findings that human reviewers may have access to data entered into AI chatbots, and that the chatbots themselves may inadvertently expose that information when prompted to do so. This vulnerability poses a real risk of information leakage.

SEE ALSO: How to Use ChatGPT on Android and iOS

Code Generation Limitations

In addition to cautioning against sharing sensitive information, Alphabet, Google’s parent company, has advised employees against using code generated by AI chatbots, including Bard.

Alphabet likely has concerns about “unwanted code suggestions” that AI-generated code could introduce. This precautionary measure is intended to maintain code integrity and to prevent unforeseen issues that may arise from relying on AI-generated code.

Samsung’s Response and Internal AI Development

Google’s approach mirrors Samsung’s: the company responded to a similar incident in which internal source code was leaked through OpenAI’s ChatGPT.

In response, Samsung banned the use of AI chatbots on company-provided devices. In addition, Samsung announced plans to develop its own AI-powered tool for internal use, highlighting the need for greater control and security over sensitive information.

SEE ALSO: Why Companies Are Willing to Pay Big Bucks for AI Prompt Engineers

Safeguarding Confidential Matters

Matthew Prince, CEO of Cloudflare, aptly compared entering confidential matters into chatbots to “letting a bunch of Ph.D. students loose on all of your private data.” This analogy underscores the importance of prioritizing privacy and security when interacting with AI chatbots.

News Source: Reuters
