AI experts debate the potential for human extinction.

Source: VentureBeat
TL;DR Summary

Top AI researchers are pushing back on the ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). They argue that this focus comes at the expense of necessary attention to current, measurable AI risks, including bias, misinformation, high-risk applications, and cybersecurity. Many say the bombastic views around existential risk may be “more sexy,” but that they hurt researchers’ ability to deal with pressing problems such as hallucinations, factual grounding, keeping models up to date, making models serve other parts of the world, and access to compute.

