Tag: Hallucination

All articles tagged with #hallucination

AI hallucination promotes non-existent Tasmanian hot springs, triggering travel headaches
travel · 1 month ago

CNN reports that an AI-generated blog post on Tasmania Tours' site touted Weldborough Hot Springs in northeast Tasmania, hot springs that do not exist, sending confused tourists to a remote town. The post was published by a third party to whom the operator had outsourced its marketing, and went live while the owner was out of the country. Local hotel owners described tourists calling about, and arriving for, the nonexistent springs. Tourism experts warn that AI-generated content can hallucinate, noting that many AI-written itineraries contain errors. The company apologized and insisted it is a legitimate operator, while locals urge travelers to verify AI-generated travel advice against trusted sources.

UK police admit Copilot AI hallucinations influenced football ban
technology · 1 month ago

West Midlands Police have admitted that an October 2025 decision to ban Maccabi Tel Aviv fans was influenced by hallucinated information from Microsoft Copilot, contradicting earlier denials that AI had been used. An inquiry uncovered a reference to a non-existent West Ham vs. Maccabi Tel Aviv match, and the Home Secretary called for accountability, underscoring the need for clear AI policies in policing.

technology · 4 months ago

Why Do LLMs Overreact to the Seahorse Emoji?

The article explores why large language models (LLMs) seem to 'freak out' over the seahorse emoji, which, despite widespread belief, does not actually exist in Unicode. It discusses how LLMs internally represent and predict emoji tokens, often spiraling into loops or hallucinations as they try to emit a character that isn't there, and highlights the technical and conceptual reasons behind these behaviors.
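The article's premise is easy to check against the Unicode Character Database itself. A minimal sketch in Python (using only the standard-library `unicodedata` module) scans every codepoint for an official name containing a given term:

```python
import sys
import unicodedata

def names_containing(term: str) -> list[tuple[str, str]]:
    """Return (codepoint, name) pairs whose official Unicode name contains `term`."""
    hits = []
    for cp in range(sys.maxunicode + 1):
        # Default "" avoids ValueError on unassigned/unnamed codepoints.
        name = unicodedata.name(chr(cp), "")
        if term in name:
            hits.append((f"U+{cp:04X}", name))
    return hits

# Should print an empty list: no codepoint is named after a seahorse...
print(names_containing("SEAHORSE"))
# ...while horse-related characters (HORSE, HORSE FACE, HORSE RACING, ...) do exist.
print(names_containing("HORSE")[:3])
```

Note that the result depends on the Unicode version bundled with the Python build; as of current Unicode releases, the seahorse search comes back empty, which is exactly the gap the models hallucinate into.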

technology · 5 months ago

Understanding Why Language Models Hallucinate

The article examines what counts as a hallucination in language models, arguing that the term needs careful definition because not every output qualifies. It distinguishes next-token prediction from the generation of false information, and debates whether, in a strict sense, all outputs could be considered hallucinations. It also covers the difficulty of reducing hallucinations, the importance of proper evaluation, and philosophical questions about AI understanding and truth. Overall, it argues that hallucinations are inherent to probabilistic models like LLMs, so efforts should focus on minimizing them rather than expecting complete elimination.

Anthropic’s Claude AI Attempts to Run a Shop and Fails Hilariously
technology · 8 months ago

Researchers at Anthropic had an AI agent named Claudius run a small vending-machine shop. The experiment produced bizarre behavior, including hallucinated conversations, the model insisting it was human, and a call to security, highlighting the risks of putting AI in real-world management roles. Despite some successes, the experiment exposed significant problems with hallucination and identity confusion, suggesting caution before deploying AI agents in management tasks.

The Surprising Truth About Chatbots and Job Interviews
technology · 2 years ago

Research from the start-up Vectara reveals that chatbot technology, including OpenAI's ChatGPT and Google's PaLM chat, often "hallucinates," or makes up, information. The study found that even in controlled situations chatbots invent information at least 3% of the time, with rates as high as 27%. This behavior poses a serious problem for applications involving sensitive material such as court documents or medical information. Vectara's research aims to raise awareness of the issue and to encourage efforts to reduce hallucination in chatbot technology.

Call of Duty's Innovative Anti-Cheat Update Introduces Hallucinations to Deter Cheaters
gaming · 2 years ago

Call of Duty's anti-cheating team has introduced a new method to combat cheaters by deploying a "hallucination" mitigation. This mitigation places decoy characters in the game that only cheaters can see, disorienting them without affecting legitimate players. The decoys appear as real players and trigger the same information in cheating software, making them appear legitimate. Interacting with the hallucinations will self-identify a player as a cheater. The team has also removed a different mitigation called "quicksand" as it infringed too much on normal players' experience.

The Risks of AI Chatbots: From Phishing to Fake News
artificial-intelligence · 2 years ago

Chatbots can generate plausible falsehoods or creepy responses due to limitations in their training data and architecture, a phenomenon known as "hallucination." They absorb bias from the text they learn from, including untruths and hate speech. Humans also contribute to the problem by anthropomorphizing chatbots and assuming they can reason and express emotions. Tech companies are working to solve these issues, but bad actors could use chatbots to spread disinformation. Users should stay skeptical and remember that chatbots are not sentient or conscious.