Stack Overflow bans responses from OpenAI’s ChatGPT chatbot


Stack Overflow, a site where developers can ask and answer questions about development and programming, has temporarily banned the use of text generated by ChatGPT, a chatbot launched by OpenAI last week.

ChatGPT is based on OpenAI’s GPT-3 language model. Users of this new service quickly discovered that although the chatbot answers questions in a “human-like” way, the answers it gives can have flaws.

Since its launch, the chatbot has been put to many uses, including writing new code and fixing coding errors, and it can ask for more context when a human asks it to solve a coding problem, as OpenAI shows in its examples. But OpenAI also notes that ChatGPT sometimes writes “plausible-sounding but incorrect or nonsensical answers.”

“Although the answers ChatGPT produces have a high rate of errors, they typically look like they might be good”

That appears to be at the heart of its impact on Stack Overflow and on users looking for correct answers to coding problems. And because ChatGPT generates answers so quickly, some users were posting large numbers of answers produced by the tool without checking them for accuracy.

“The main problem is that, although the answers ChatGPT produces have a high rate of errors, they typically look good and are very easy to produce,” Stack Overflow’s moderators write in a post.

Stack Overflow imposed the temporary ban because answers created by ChatGPT are “substantially harmful”, both to the site and to users searching for correct answers.

The average cost of each answer is a few cents

“Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking for or looking for correct answers.”

Sam Altman, the head of OpenAI, announced on Twitter that ChatGPT has passed one million users since its launch last Wednesday. He also told Twitter owner Elon Musk that the average cost of each response was in the single-digit-cent range, but admitted that the service will eventually have to be monetized because of its “eye-watering” computational costs.

Stack Overflow adds that ChatGPT answers have “overwhelmed” its volunteer-based quality curation infrastructure because of the volume of low-quality answers pouring in.

“We need the volume of these posts to decrease”

So far, Stack Overflow has detected ChatGPT-generated posts numbering in the “thousands”. The other problem is that many answers require a detailed read by someone with expertise in the subject to determine whether they are wrong.

Under the new policy, if a user is caught posting ChatGPT-generated answers, Stack Overflow will impose “penalties” on that user, even if the posts would otherwise be acceptable.

“As such, we need the volume of these posts to decrease, and we need to be able to deal with the ones that have been posted quickly, which means dealing with users rather than individual posts,” Stack Overflow says. “So, for now, using ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy was posted, sanctions will be imposed to prevent them from continuing to post such content, even where the posts would otherwise have been acceptable.”

Source: ZDNet.com




