AI experts debate the potential for human extinction.
By VentureBeat

Top AI researchers are pushing back on the ‘doomer’ narrative centered on existential risk from runaway artificial general intelligence (AGI). They argue that this preoccupation comes at the expense of urgent attention to current, measurable AI risks, including bias, misinformation, high-risk applications, and cybersecurity. Many say that bombastic claims about existential risk may be “more sexy,” but they distract researchers from concrete problems such as hallucinations, factual grounding, keeping models up to date, making models serve other parts of the world, and access to compute.