Google cautions employees against using Bard-generated code and against sharing sensitive information with AI chatbots.

TL;DR Summary
Google has warned its employees not to use code generated by its AI chatbot, Bard, citing privacy and security risks. Nuance, a voice-recognition software developer acquired by Microsoft, has been accused in an amended lawsuit filed last week of recording and using people's voices without permission. Google's DeepMind AI lab does not want the US government to set up an agency singularly focused on regulating AI. OpenAI reportedly cautioned Microsoft against releasing its GPT-4-powered Bing chatbot too quickly, warning that it could generate false information and inappropriate language.
- Google warns its own employees: Do not use code generated by Bard (The Register)
- Google says no sharing of sensitive info with AI chatbots like Bard, here's why (People Matters)
- Google's AI chatbot is banned where I live — here's why that's a good thing (Tom's Guide)
- Even Google is warning its employees about AI chatbot use (ZDNet)
- Google urges caution with AI chatbots | WION Pulse (WION)
Read the original article on The Register.