OpenAI may have found a way to avoid AI hallucinations


Although AI models keep improving, they can still make mistakes and produce incorrect answers, a phenomenon experts call hallucinations.

All major AI chatbots, including ChatGPT and Google Bard, are prone to these hallucinations. OpenAI and Google even indicate that their chatbots can produce incorrect information.

“ChatGPT sometimes writes plausible but incorrect or nonsensical answers,” says OpenAI in a ChatGPT blog post.

Solve complex math problems with “process supervision”

In a new post about its research, OpenAI says it may have found a way to make AI models act more logically and avoid hallucinations.

OpenAI has trained a model capable of solving complex mathematical problems through “process supervision,” a method that provides feedback for each individual step, as opposed to “outcome supervision,” which provides feedback only on the final result.
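To make the distinction concrete, here is a minimal sketch of the difference between the two feedback schemes. All names, data structures, and values below are invented for illustration; this is not OpenAI's actual training code, only a toy comparison of a single reward for the final answer versus one reward per reasoning step.

```python
# Illustrative sketch only: hypothetical structures showing how feedback
# differs between outcome supervision and process supervision.

from dataclasses import dataclass


@dataclass
class Solution:
    steps: list[str]      # intermediate reasoning steps produced by the model
    final_answer: str     # the answer the model ultimately gives


solution = Solution(
    steps=[
        "Let x be the unknown number.",
        "2x + 3 = 11, so 2x = 8.",
        "Therefore x = 4.",
    ],
    final_answer="4",
)


def outcome_feedback(sol: Solution, correct_answer: str) -> float:
    # Outcome supervision: a single reward based only on the final result.
    return 1.0 if sol.final_answer == correct_answer else 0.0


def process_feedback(sol: Solution, step_labels: list[bool]) -> list[float]:
    # Process supervision: one reward per step, so every correct reasoning
    # step is rewarded, not just the end result.
    return [1.0 if ok else 0.0 for ok in step_labels]


print(outcome_feedback(solution, "4"))                  # -> 1.0
print(process_feedback(solution, [True, True, True]))   # -> [1.0, 1.0, 1.0]
```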

In the research paper, OpenAI tested both methods using the MATH dataset and found that process supervision led to “significantly better performance.”


[Image: ChatGPT results. Screenshot by Sabrina Ortiz/ZDNET]

“Process supervision is also more likely to produce interpretable reasoning because it encourages the model to follow a human-approved process,” OpenAI explains in its research paper.

OpenAI notes that it is unclear how far these findings will generalize beyond mathematics to other domains.


Source: “ZDNet.com”
