ChatGPT Under Siege: AI Malware and Fake Ads Pose Threats

1 min read
Source: WIRED
ChatGPT Under Siege: AI Malware and Fake Ads Pose Threats
Photo: WIRED
TL;DR Summary

Hackers are finding new ways to jailbreak OpenAI's language model, ChatGPT, by using multiple characters, complex backstories, and translating text from one language to another. Prompt injections can also be used to plant malicious instructions on a webpage, which can be followed by Bing Chat or other language models. As generative AI systems become more powerful, the risks of jailbreaks and prompt injections increase, posing a security threat. Companies like Google are addressing these risks by using reinforcement learning and fine-tuning on curated datasets to make their models more effective against attacks.

Share this article

Reading Insights

Total Reads

0

Unique Readers

1

Time Saved

2 min

vs 3 min read

Condensed

84%

57592 words

Want the full story? Read the original article

Read on WIRED