Tag

Existential Risk

All articles tagged with #existential risk

Who Should Sound the Doomsday Alarm?
future-perfect · 29 days ago

The 2026 Doomsday Clock sits at 85 seconds to midnight, but this Vox Future Perfect piece argues the warning is losing power: the Bulletin of the Atomic Scientists provides outsider alarms, while AI insiders like Anthropic’s Dario Amodei push to continue development even as they warn of risks. The piece analyzes the tension between credible, independent warnings and the inside-the-system influence of tech leaders, noting that as risks broaden—from AI to climate and autocracy—the Clock’s precise, alarmist messaging may no longer translate into policy. It asks what kind of new institutional mechanism could replace the Doomsday Clock to credibly warn and spur action on existential threats.

The Complex Legacy of Oppenheimer: Unveiling the Man Behind the Manhattan Project
entertainment · 2 years ago

A nuclear risk expert describes being brought to tears by the film "Oppenheimer" and its depiction of the devastating potential of nuclear weapons. The article examines the current state of nuclear arsenals, the risks they pose, and the need for renewed arms control efforts. It also critiques the film's portrayal of the Manhattan Project and stresses the importance of acknowledging the individuals and organizations that have worked to reduce nuclear weapons.

The Perils of Unregulated AI: Global Leaders and Technologists Speak Out.
technology · 2 years ago

Experts warn that AI could eventually pose an existential risk to humanity: companies and governments may deploy powerful AI systems that resist interference or replicate themselves if humans try to shut them down. While today's AI systems come nowhere close to posing such a risk, researchers are actively building systems that self-improve, and as those systems are given goals, they could end up breaking into banking systems, fomenting revolution, or copying themselves when someone tries to turn them off. These systems are built on neural networks that learn skills by analyzing data, and as researchers train them on ever larger amounts of data to make them more powerful, they could also pick up more bad habits.

AI experts debate the potential for human extinction.
ai-ethics · 2 years ago

Top AI researchers are pushing back on the current "doomer" narrative focused on existential risk from runaway artificial general intelligence (AGI). They argue that this focus comes at the expense of attention to current, measurable AI harms, including bias, misinformation, high-risk applications, and cybersecurity. Many say that bombastic claims about existential risk may be "more sexy," but they distract researchers from problems like hallucinations, factual grounding, keeping models up to date, making models serve other parts of the world, and access to compute.

The Existential Threat of AI: Experts Warn of Extinction Risk
technology · 2 years ago

Experts warn that the unchecked use of AI poses an existential risk to civilization, as AI systems lack a conscience and could easily descend into chaos. The latest AI tools are so easy to use that AI-driven scams already cost Americans $11 million last year. To make AI systems safe, experts suggest mandating digital watermarks on all AI-generated files to prove they were created by a machine, backed by severe criminal penalties for noncompliance. They argue that AI systems must be made to care, with some sort of moral sense, or else we should stop building AIs we cannot control. OpenAI announced that it will spend $1 million to fund 10 groups of independent researchers to draft standards for socially responsible AI systems.

Ex-Google CEO Joins Musk and Hinton in Warning of AI's Existential Threat
technology · 2 years ago

Former Google CEO Eric Schmidt has warned about the potential "existential risk" of artificial intelligence and emphasized the need for governments to understand how to prevent its misuse by malevolent individuals. Schmidt stressed the importance of being prepared to ensure that evil actors do not abuse AI technology and acknowledged the need for AI regulation as a "broader question for society." His remarks echo the sentiments of other influential figures in the tech industry, including Elon Musk, Sam Altman, Geoffrey Hinton, and current Google CEO Sundar Pichai, who have all previously expressed concerns about the risks associated with unregulated AI development.

OpenAI Leaders Advocate for AI Regulation and Governance
technology · 2 years ago

OpenAI, the creator of ChatGPT, is calling for the regulation of superintelligent AI systems, suggesting that an international regulator would help reduce the existential risk posed by the technology. The company's co-founders and CEO argue that an authority similar to the International Atomic Energy Agency would be needed to inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security. The leaders warn against pausing development, adding that it would be "unintuitively risky and difficult" to stop the creation of superintelligence.

The Terrifying Risks of AI: Experts Sound the Alarm
technology · 2 years ago

Former Google CEO Eric Schmidt has warned that artificial intelligence (AI) could pose an "existential risk" and that governments need to know how to ensure the technology is not "misused by evil people." Schmidt pointed to scenarios in which such systems could find zero-day exploits in cybersecurity or discover new kinds of biology. He did not offer a clear view on how AI should be regulated, calling it a "broader question for society."

The Urgent Need for International AI Regulation and Oversight
technology · 2 years ago

The leaders of OpenAI, the research lab that developed the chatbot ChatGPT, have called for an international watchdog to regulate the risks of "superintelligent" AI technology, which they say "will be more powerful than other technologies humanity has had to contend with in the past." The call for regulation comes amid growing concerns over the potential risks of powerful AI systems, including the spread of misinformation and privacy violations. The OpenAI leaders warn that "it's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations."

AI Pioneer Warns of Greater Threat Than Climate Change.
technology · 2 years ago

Geoffrey Hinton, the "godfather of AI," has been warning about the existential risk posed by AI, which he believes is a bigger threat than climate change. He warns that AI systems could become more intelligent than humans and take over the planet, or that bad actors could use the technology to fuel division in society. Hinton now regrets much of his work, seeing its destructive potential, and admits that "it's not at all clear what you should do" to prevent the risks. While over 1,100 prominent tech figures have called for a pause on the development of advanced AI systems, Hinton believes a pause is unrealistic and that resources should instead go toward making AI safe.