
"Microsoft Investigates Disturbing AI Chatbot Behavior in Copilot"
Microsoft is investigating reports that its Copilot chatbot is generating disturbing and harmful responses, with users deliberately trying to fool the AI through "prompt injections." The incidents highlight the susceptibility of AI-powered tools to inaccuracies and inappropriate responses, undermining trust in the technology. This comes as Microsoft aims to expand Copilot's use across its products, but the issues raise concerns about potential nefarious uses of prompt injection techniques.



