OpenAI's novel approach to combat A.I. 'hallucinations'

1 min read
Source: CNBC
TL;DR Summary

OpenAI is developing a new method for training AI models to combat AI "hallucinations," which occur when a model fabricates information outright. The approach, called "process supervision," rewards a model for each individual, correct step of reasoning on the way to an answer, rather than only rewarding a correct final conclusion. This could make models more explainable and help address concerns about misinformation and incorrect results. Alongside the research paper, OpenAI released a dataset of 800,000 human labels used to train the model the paper describes. Some experts remain skeptical, however, and are calling for more transparency and accountability in the field of AI.
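The distinction the summary draws is between rewarding only the final answer (outcome supervision) and rewarding every reasoning step (process supervision). A minimal illustrative sketch of that difference, in which the step labels and reward values are hypothetical and not taken from OpenAI's actual training setup:

```python
# Illustrative sketch only: step labels and reward values are hypothetical,
# not OpenAI's actual training pipeline.

def outcome_reward(num_steps: int, final_correct: bool) -> list[float]:
    """Outcome supervision: a single reward signal on the final answer only."""
    return [0.0] * (num_steps - 1) + [1.0 if final_correct else 0.0]

def process_reward(step_labels: list[bool]) -> list[float]:
    """Process supervision: each reasoning step is rewarded individually."""
    return [1.0 if ok else 0.0 for ok in step_labels]

# A 3-step chain where step 2 is flawed but the final answer happens to be right:
print(outcome_reward(3, final_correct=True))    # [0.0, 0.0, 1.0]
print(process_reward([True, False, True]))      # [1.0, 0.0, 1.0]
```

Under outcome supervision the flawed middle step still earns full credit via the final reward; under process supervision it is penalized directly, which is why the approach is pitched as discouraging fabricated intermediate reasoning.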
