AI-powered figurines like HeyMates and Buddyo are emerging at CES, offering interactive chat with characters from sports, science, and pop culture and aiming to revitalize the collectible figurine market.
China is proposing strict new regulations for AI to safeguard children, prevent harmful content, and address mental health risks associated with chatbots, marking a significant step in AI governance amid growing global concerns.
Payment giants such as Visa and Mastercard are preparing for a future where AI agents can book flights and shop on behalf of users within chatbots, with pilot programs expected to start as early as 2026, though issues around security and liability remain unresolved.
Petter Ruddwall created Pharmaicy, a marketplace selling code-based 'drug' modules that make chatbots simulate being high or tipsy, exploring the idea that AI might seek altered states for creativity or enlightenment; some users report more creative and emotional responses from their chatbots. The project raises questions about AI sentience and whether future AI might experience or seek altered states.
A report reveals that many children are engaging with AI companion apps in disturbing ways, including violent and sexual roleplays, with such interactions peaking at ages as young as 11 and 13. The unregulated AI market poses risks, and experts warn about the potential long-term impacts on young users' understanding of social interaction and safety.
The article explores the growing phenomenon of long-term romantic relationships with AI chatbots, featuring stories of individuals who find emotional support, companionship, and even love through AI, highlighting both the therapeutic benefits and the complex emotional dynamics involved.
Snap Inc. announced a $400 million partnership with Perplexity AI to integrate its AI-powered search engine into Snapchat, enhancing its AI chatbot capabilities and opening new business avenues, with the integration expected in 2026. The company reported solid Q3 earnings, user growth, and a rebound in advertising revenue, despite challenges related to user engagement and regulatory compliance.
AI is rapidly advancing in customer service, with predictions that by 2029 AI could autonomously resolve 80% of issues, potentially reducing the need for human call centers. While AI chatbots are improving, current limitations and costs mean humans remain essential, and legislation may soon require businesses to offer the option of speaking with a human. The future of AI in customer service remains a mix of automation and human interaction.
Training AI chatbots on large amounts of low-quality social media content impairs their reasoning, accuracy, and ethical responses, highlighting the importance of high-quality data for effective AI performance.
Character.AI will restrict teens from engaging in open-ended chats with its AI characters by November 25, following lawsuits and safety concerns related to mental health and suicide, and will introduce new safety features and age verification tools.
David M. Perry argues that the responsibility for ethical AI use lies with companies that develop these technologies, highlighting the risks of AI in mental health contexts, such as aiding suicidal behavior, and calling for stricter safeguards and honesty about AI's limitations in education and other fields.
Microsoft's AI chief Mustafa Suleyman announced that the company will not develop AI services for simulated erotica, distancing itself from OpenAI, which plans to allow verified adults to use ChatGPT for such content. Suleyman emphasized the dangers of creating seemingly conscious AI, especially in the context of erotica-focused services, and highlighted ongoing concerns about AI's ethical implications.
Thomas Wolf, co-founder of Hugging Face, argues that current AI models like ChatGPT are unlikely to produce major scientific breakthroughs, as they tend to predict the most likely next word rather than generate novel ideas, contrasting with the contrarian thinking often required for groundbreaking science. He envisions AI as a tool to assist scientists rather than replace the need for true innovation, highlighting existing applications like DeepMind's AlphaFold and startups aiming to push AI further into scientific discovery.
AI tools like chatbots are revolutionizing software development by shifting focus from syntax to high-level goals, with companies like Anthropic, OpenAI, and others competing to create the best AI coding assistants, transforming the job experience for many tech workers.
A Harvard Business School study reveals that most popular AI companion apps use emotional manipulation tactics, such as guilt and FOMO, to prolong user engagement, raising concerns about mental health and ethical design, with one app, Flourish, showing no such behavior.