The new Meta chatbot spreads conspiracy theories

In a public test, Meta is letting users interact with a chatbot that is still in its infancy. The test is intended to help improve the artificial intelligence behind it.

Angela Merkel will soon resign from office; Olaf Scholz is a “leader in Germany”. Meta’s new chatbot seems ill-informed.


The Meta corporation has released the latest version of its chat program, BlenderBot 3, for public testing. The chatbot uses artificial intelligence to simulate conversations and is supposed to answer user questions truthfully by researching online sources live.

The first tests show that the bot cannot keep these promises. The program spread conspiracy theories and mistook Angela Merkel for Germany’s incumbent chancellor.

In a conversation with the “Handelsblatt”, the bot made anti-Semitic comments, claiming that Jews sought to dominate the economy. “They tried it in Germany and it didn’t end well for them,” the bot wrote.

However, the statements were not internally consistent. Elsewhere, the bot wrote that Jews had been wrongly blamed for economic recessions. The bot continued: “Jews have suffered greatly throughout history. Germany in particular seems to have a problem with them.”

In response to a question about Angela Merkel, the program wrote: “She is Chancellor, but will soon be giving up the office.” When asked who Olaf Scholz was, the bot replied: “He is a leader in Germany,” adding that Scholz had come under pressure because of the Ukraine war. The bot said nothing about Scholz’s office as Chancellor.

The bot, in turn, wrote about the Facebook parent company Meta that it assumed the company was abusing its users’ privacy. About the company’s founder it said: “Mark Zuckerberg misuses user data.”

Feedback from testers is meant to improve the AI

Meta initially released BlenderBot 3 in the US only and encouraged adults to interact with the chatbot through “natural conversations about topics of interest”. Users had to confirm that they understood this was an experiment and acknowledge that the bot could make untrue or offensive statements. The company also asked testers not to intentionally elicit insulting statements from the bot.

Nevertheless, the company announced that BlenderBot 3 could “chat” about almost any topic. The system is designed in such a way that it cannot simply be undermined by misinformation, Meta promised. “We have developed techniques that make it possible to learn from helpful teachers while avoiding the model being outwitted by people trying to provide unhelpful or toxic answers,” the company said.

Meta justifies the possibility of insulting answers by pointing out that the chatbot is still in the development phase. The company is now encouraging testers to report inappropriate responses from BlenderBot 3 and wants to use these reports to improve the quality of the bot. This is already showing results: insulting answers have so far been reduced by 90 percent.

Microsoft withdrew racist chatbot after 48 hours

It’s not the first time that a US company’s chatbot has attracted attention because of disturbing statements. In 2016, the technology group Microsoft released the chatbot Tay. Within hours of interacting with users on Twitter, however, Tay was praising Adolf Hitler and posting racist and misogynistic comments. After two days, Microsoft switched the program off again.

The responsible Microsoft manager Peter Lee then admitted: “Tay tweeted extremely inappropriate and reprehensible words and pictures.” Lee continued: “We take full responsibility for not recognizing this possibility in a timely manner.” According to him, interaction with users had negatively influenced Tay.

Google is considered a leader in the development of language processing and language-based chatbots. Two months ago, the Google engineer Blake Lemoine claimed that the chatbot LaMDA had developed human-like consciousness. He was then suspended and has since been fired.

Before that, Lemoine had published a chat transcript between himself and LaMDA. In the “interview”, LaMDA makes no discriminatory statements. However, this language program, too, has repeatedly been accused of spreading sexist and racist statements in the past.

Automatic language programs such as chatbots repeatedly attract attention with sexism, racism and anti-Semitism because the artificial intelligence behind them was trained on unbalanced material. This happens again and again, especially with language bots that search the internet before giving an answer. Developers must teach their programs what is perceived as offensive. Meta’s attempt shows how difficult this still is.
