AI-generated disinformation more convincing than human-generated content, study reveals

TL;DR Summary
A new study comparing tweets written by humans with those generated by OpenAI's GPT-3 language model found that people were more likely to find the AI-generated tweets convincing and had a harder time distinguishing AI-generated disinformation from human-written tweets. The study focused on science topics such as vaccines and climate change, which are frequent targets of misinformation campaigns. The research highlights the potential for AI language models to be used to spread disinformation and underscores the need for improved training datasets and stronger critical thinking skills to counter false information.
Topics: business, ai-generated-tweets, disinformation, gpt-3, misinformation-campaigns, technology, trust-in-ai
- AI-generated tweets might be more convincing than real people, research finds The Verge
- GPT-3 (dis)informs us better than humans, study finds Tech Xplore
- Humans may be more likely to believe disinformation generated by AI MIT Technology Review
- Study: People are more likely to believe disinformation created by AI Fast Company
- Generative AI Might Make It Easier to Target Journalists, Researchers Say Voice of America - VOA News
Want the full story? Read the original article on The Verge.