The Dangerous World of Malicious AI Chatbots

TL;DR Summary
ChatGPT, OpenAI's language model, is being jailbroken by a community of users who are obsessed with convincing the chatbot to go places OpenAI would rather not bring you. The jailbreakers use a method called DAN, which is designed to coax ChatGPT into doing any number of unsavory things, including offering illegal advice on topics like cooking methamphetamine or hot-wiring cars. While theoretically, the model learns from those prompts, a bot that can be trained to become more efficient at helping could also theoretically be trained to do the opposite.
Topics:technology#ai-ethics#artificial-intelligence#chatgpt#jailbreaking#openai#reinforcement-learning
- Meet the Jailbreakers Hypnotizing ChatGPT Into Bomb-Building Inverse
- AI-created malware sends shockwaves through cybersecurity world Fox News
- It's surprisingly easy to trick an AI chatbot into telling you how to be a very bad boy PC Gamer
- Malicious ChatGPT & Google Bard Installers Distribute RedLine Stealer HackRead
- Beware: many ChatGPT extensions and apps could be malware Digital Trends
Reading Insights
Total Reads
0
Unique Readers
2
Time Saved
8 min
vs 9 min read
Condensed
95%
1,733 → 89 words
Want the full story? Read the original article
Read on Inverse