Author Tom Ough explores potential end-of-world scenarios, emphasizing man-made risks such as nuclear war, engineered viruses, and AI, while highlighting humanity's fragile progress and the importance of preserving knowledge to avert civilizational collapse.
US Vice President Kamala Harris called for a broader definition of AI safety, urging the international community to address not only far-off existential threats but also the existing and near-term risks of the technology. She highlighted examples such as deepfake abuse, bias leading to wrongful imprisonment, and the spread of AI-enabled mis- and disinformation. Harris announced the launch of the US AI Safety Institute, the participation of 31 countries in the US State Department's declaration on responsible military use of AI, and over $200 million in AI-safety funding secured by the White House. The US also signed the Bletchley Declaration on AI, which focuses on "frontier AI."
A new book by scientist and writer John Hands argues that most existential threats to humankind have a low or negligible probability of coming true, and that there are many reasons to be optimistic about the future. Hands suggests that altruism, creativity, and a convergence of ideas have fostered human cooperation, allowing us to evolve from living in tribes to building global organisations like the United Nations. He is also cautiously optimistic that humanity can confront its environmental challenges, arguing that the more we think about these threats, the better placed we are to act on them.