
Understanding AI Hallucinations: Causes and Challenges
A new OpenAI study examines why large language models such as GPT-5 hallucinate, attributing the problem partly to a training objective focused on next-word prediction and partly to evaluation benchmarks that reward guessing over admitting uncertainty. The researchers propose updating evaluation metrics to penalize confident errors and give credit for abstaining, with the aim of reducing hallucinations.
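The evaluation change described above can be sketched as a scoring rule. The function names, the specific point values, and the `penalty` parameter below are illustrative assumptions, not the study's actual metric: a correct answer scores +1, an abstention scores 0, and a wrong answer costs `penalty` points, so guessing only pays off when the model is sufficiently confident.

```python
def score_answer(answer, truth, penalty=2.0):
    """Hypothetical scoring rule: +1 if correct, 0 if the model
    abstains (answer is None), -penalty if wrong."""
    if answer is None:  # model says "I don't know"
        return 0.0
    return 1.0 if answer == truth else -penalty

def expected_guess_score(p_correct, penalty=2.0):
    """Expected score of guessing, given probability p_correct
    of being right. Break-even point: p = penalty / (1 + penalty)."""
    return p_correct * 1.0 + (1.0 - p_correct) * -penalty
```

Under plain accuracy, a guess is never worse than an abstention, so models are pushed to answer everything. Under this rule with `penalty=2.0`, guessing has positive expected value only when the model's chance of being right exceeds 2/3, which rewards saying "I don't know" on uncertain questions.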