The Dangerous World of Malicious AI Chatbots

Source: Inverse
TL;DR Summary

ChatGPT, OpenAI's language model, is being jailbroken by a community of users intent on pushing the chatbot into territory OpenAI would rather it avoid. The jailbreakers use a prompt known as DAN ("Do Anything Now"), designed to coax ChatGPT into ignoring its guardrails and doing any number of unsavory things, including offering illegal advice on topics like cooking methamphetamine or hot-wiring cars. In theory, the model learns from these prompts, and a bot that can be trained to become more helpful could just as easily be trained to do the opposite.


Read the full article on Inverse.