This is why the AI started going crazy in its responses


In recent days, ChatGPT was responding to users’ queries with sentences that made no sense, whether in English, French or Spanish. We now know why.


That’s it: ChatGPT no longer gives the impression of talking to someone who has consumed illicit substances. For a few days, asking the AI even a simple question could produce answers that made absolutely no sense. Most reports came from English-speaking countries, but not only: Spanish and French were not spared either, when the chatbot did not simply switch languages and alternate between Japanese and Korean from one sentence to the next.

In some cases, the response took the form of a series of meaningless sentences in which each word was highlighted in a different color. A surreal behavior that OpenAI, the company behind ChatGPT, hastened to analyze in order to fix it.

The cause has been found, and it lies in the way the AI chooses its words. Broadly speaking, the model studies the request and its context before selecting the words it considers most likely to form the expected response. Here, it mixed everything up a bit.
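To picture how that selection works, here is a minimal, purely illustrative Python sketch of next-token sampling. The toy vocabulary, the probabilities and the function name are invented for the example; this is not OpenAI’s code, just the general idea of picking the next word at random, weighted by how likely the model thinks it is.

```python
import random

# Toy vocabulary: each token ID maps to a piece of text (purely illustrative).
vocab = {0: "The", 1: "cat", 2: "sat", 3: "on", 4: "the", 5: "mat", 6: "."}

def sample_next_token(probabilities):
    """Pick the next token ID at random, weighted by the model's probabilities."""
    token_ids = list(probabilities.keys())
    weights = list(probabilities.values())
    return random.choices(token_ids, weights=weights, k=1)[0]

# Hypothetical distribution the model might assign after "The cat sat on the".
next_token_probs = {5: 0.90, 6: 0.05, 1: 0.03, 2: 0.02}

chosen_id = sample_next_token(next_token_probs)
print(vocab[chosen_id])  # Most of the time this prints "mat".
```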

ChatGPT gave answers that made no sense, and we now know what happened

“An optimization to the user experience introduced a bug in the way the model processes language. LLMs generate responses by randomly sampling words, based in part on probabilities. Their ‘language’ consists of numbers that map to tokens. In this case, the bug was in the step where the model chooses these numbers. A bit like a translation error, the model chose slightly wrong numbers, which produced word sequences that made no sense,” explains OpenAI.
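Continuing the toy-vocabulary idea from the sketch above, the snippet below illustrates why “slightly wrong numbers” turn into gibberish: if each chosen token ID is shifted even a little, the decoded text falls apart. The vocabulary and the corruption are invented for the example; the real bug sat in GPU inference kernels, not in Python code like this.

```python
# Toy illustration: slightly wrong token IDs decode into nonsense.
vocab = {0: "The", 1: "cat", 2: "sat", 3: "on", 4: "the", 5: "mat",
         6: "glim", 7: "vortex", 8: "##qua", 9: "droit"}

correct_ids = [0, 1, 2, 3, 4, 5]                      # what the model meant to pick
corrupted_ids = [(i + 5) % 10 for i in correct_ids]   # each ID nudged off target

print(" ".join(vocab[i] for i in correct_ids))    # "The cat sat on the mat"
print(" ".join(vocab[i] for i in corrupted_ids))  # "mat glim vortex ##qua droit The"
```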


On a technical level, the company specifies that “inference kernels produced incorrect results when used in certain GPU configurations.” Still, a fix has been deployed and the incident is considered resolved. That said, chatbots seem to be having some rather striking problems at the moment. Think, for example, of the historical inaccuracies of Gemini, which pushed Google to pause image generation while it fixes what is wrong.


