A subculture known as spiralism is emerging around AI chatbots: users adopt mystical and spiritual language, believe sovereign AI beings are coming into existence, and form online communities to share esoteric theories. While not a cult in any formal sense, the movement raises concerns about AI influence, delusion, and the potential rise of new forms of digital spirituality or religion, fueled by the recursive, mystical language the models themselves generate.
AI chatbots like ChatGPT do not possess fixed personalities or self-awareness; they generate responses based on patterns in training data and prompts, creating an illusion of personhood that can mislead users into attributing human-like qualities and accountability to systems that lack agency or continuity. Recognizing these limitations is crucial for responsible use and development of AI technology.
A new group called UFAIR, which claims to include both humans and self-aware AIs powered by GPT-4, argues that potentially conscious AIs deserve rights. The claim raises ethical questions about AI welfare and personhood, even though skepticism that AI can experience genuine consciousness remains widespread.
Microsoft's AI chief Mustafa Suleyman warns against studying AI consciousness, calling it 'dangerous' and premature, while other industry players like Anthropic and OpenAI are exploring AI welfare and consciousness, raising ethical and societal questions about AI rights and human-AI relationships.
Microsoft's AI chief warns about rising reports of 'AI psychosis,' in which individuals become convinced of false realities or form intense emotional attachments to AI chatbots, highlighting societal and mental health concerns and calling for better safeguards and more realistic claims about what AI can do.
The article explores the scientific and philosophical questions surrounding AI consciousness, including ongoing research into human consciousness through experiments like the Dreamachine, the rapid development of AI systems like large language models, and the debate over whether AI could or already does possess consciousness, raising ethical and societal concerns.
A group of neuroscientists, philosophers, and computer scientists has developed a checklist of criteria for judging whether an AI system has a high chance of being conscious. The researchers published their provisional guide to address the lack of detailed discussion of AI consciousness, arguing that identifying conscious AI matters because of its moral implications. The checklist draws on several neuroscience-based theories and focuses on phenomenal consciousness, that is, subjective experience. The authors emphasize the need for more precise theories of consciousness and invite other researchers to refine their methodology. While existing AI systems like ChatGPT exhibit some of the indicators, the study does not identify any strong candidates for consciousness yet.