Oxford study flags dangerous gaps in AI chatbot health guidance

TL;DR Summary
A University of Oxford study found that AI chatbots mix accurate and inaccurate medical information, making it hard for users to identify trustworthy guidance and potentially leading to unsafe decisions about when to see a GP or seek emergency care. Experts call for health-focused AI versions built with safety in mind, clearer guidelines, and regulatory guardrails to reduce misdiagnosis and confusion.
Topics: business, ai-safety, chatbots, healthcare-technology, medical-advice, technology, university-of-oxford
Related coverage:
- AI chatbots pose 'dangerous' risk when giving medical advice, study suggests (Yahoo News Canada)
- Reliability of LLMs as medical assistants for the general public: a randomized preregistered study (Nature)
- A.I. Is Making Doctors Answer a Question: What Are They Really Good For? (The New York Times)
- AI Chatbots Give Bad Health Advice, Research Finds (Barron's)
- AI-powered apps and bots are barging into medicine. Doctors have questions. (Reuters)