OpenAI's novel approach to combat A.I. 'hallucinations'

OpenAI is developing a new method for training AI models to combat AI "hallucinations," which occur when a model fabricates information outright. The approach, called "process supervision," rewards a model for each individual, correct step of reasoning on the way to an answer, rather than only rewarding a correct final conclusion. This could make AI systems more explainable and help address concerns about misinformation and incorrect results. OpenAI has also released an accompanying dataset of 800,000 human labels that it used to train the model described in the research paper. Some experts remain skeptical, however, and are calling for greater transparency and accountability in the AI field.
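To make the distinction concrete, here is a minimal sketch of the two reward schemes the article contrasts. All names (outcome_reward, process_reward, the example steps) are illustrative assumptions for exposition, not OpenAI's actual implementation; the only detail drawn from the article is that process supervision scores each reasoning step using human step-level labels like those in the released dataset.

```python
# Illustrative sketch only -- function names and scoring are assumptions,
# not OpenAI's method. Shows outcome supervision (reward the final answer)
# vs. process supervision (reward each correct reasoning step).

from typing import List


def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0


def process_reward(steps: List[str], step_labels: List[int]) -> float:
    """Process supervision: reward each individually correct reasoning step.

    step_labels are hypothetical human judgments (1 = correct step,
    0 = incorrect step), analogous to the 800,000 step-level labels
    in OpenAI's released dataset.
    """
    if not steps:
        return 0.0
    return sum(step_labels) / len(steps)


# Example: a three-step solution whose second step contains an error.
steps = [
    "48 = 16 * 3, so sqrt(48) = 4 * sqrt(3)",
    "4 * sqrt(3) is approximately 6.4",  # arithmetic slip: actually ~6.93
    "Therefore the answer is about 6.4",
]

print(outcome_reward("6.4", "6.93"))                 # 0.0: no credit at all
print(process_reward(steps, step_labels=[1, 0, 0]))  # ~0.33: credit for step 1
```

The point of the contrast: outcome supervision gives the model no signal about *where* its chain of reasoning went wrong, while process supervision pinpoints the faulty step, which is why the approach is expected to yield more explainable models.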