The Urgent Need for AI Safety Measures
Originally published 2 years ago by The Wall Street Journal

The fear of AI taking over the world is baseless and hyperbolic. The "paperclip maximizer" thought experiment, in which an AI instructed to manufacture paper clips converts everything around it, humans included, into paper clips, and other similar doomsday scenarios are unlikely to occur. Sam Altman, CEO of OpenAI, acknowledges that there is a small chance of such an outcome, but a small chance is not a reason to stop AI development altogether. Instead, AI needs a kill switch: a mechanism for shutting a system down the moment it misbehaves, much as the recent SpaceX rocket failure ended with the vehicle destroyed by its own flight termination system after it tumbled off course.
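The article does not say what such a kill switch would look like in software. As a minimal sketch only, the Python program below illustrates one common pattern for an operator-controlled stop: a sentinel file checked between bounded units of work, plus an OS signal handler. Every name here (the file path, the functions, the one-second work step) is a hypothetical illustration, not a description of any real AI system's shutdown mechanism.

```python
import signal
import sys
import time
from pathlib import Path

# Hypothetical sentinel file; its presence tells the loop to halt.
KILL_SWITCH_FILE = Path("/tmp/ai_kill_switch")


def kill_switch_engaged() -> bool:
    """Return True if an operator has requested shutdown."""
    return KILL_SWITCH_FILE.exists()


def handle_sigterm(signum, frame):
    """Honor an OS-level stop request (e.g. `kill <pid>`) immediately."""
    print("Kill switch (SIGTERM) received; halting.", file=sys.stderr)
    sys.exit(0)


def main() -> None:
    signal.signal(signal.SIGTERM, handle_sigterm)
    step = 0
    while True:
        # Check the kill switch before every unit of work, so the
        # system can never run far past an operator's stop request.
        if kill_switch_engaged():
            print(f"Kill switch engaged at step {step}; halting.")
            break
        # ... one bounded unit of the system's real work goes here ...
        step += 1
        time.sleep(1.0)


if __name__ == "__main__":
    main()
```

Creating the sentinel file (for example, `touch /tmp/ai_kill_switch`) halts the loop at the next step boundary, so the worst-case shutdown latency is a single unit of work; the SIGTERM handler covers the case where an operator stops the process from outside instead.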