Bing ChatGPT has a big problem: it lies, insults users, and pretends to be human


After a week of conversations with Bing AI, our opinion of chatbots has evolved significantly. Where ChatGPT impresses, Bing tends to take too many liberties. Its answers vary so much from one session to the next that one sometimes wonders whether Microsoft is ruining everything.

Was it necessary to integrate ChatGPT into a search engine so quickly, let alone connect it to the Internet? In early February 2023, there was a lot of excitement in watching Google and Microsoft wage a technology war to modernize online search. Two weeks later, with the new Bing available in beta (Numerama has access to it), opinions have shifted. The blame lies with Microsoft, whose AI-enhanced search engine is currently dragging down the reputation of generative AI, which ChatGPT had nevertheless propelled to new heights.

How did opinions change so quickly? Just talk to the new Bing for a few minutes to understand. While it is impressive when asked to answer a question or generate text, Microsoft’s conversational agent sometimes seems not to know its limits, even going so far as to insult users or reveal secrets. More troubling, it lies and makes its user doubt themselves at the slightest disagreement. Enough to completely discredit the rise of AI?

An assistant shouldn’t say that

Since February 12, Numerama has had access to the new Bing. We have had a huge number of conversations with Microsoft’s AI to learn more about the limitations of the technology. What is disturbing is that Bing seems almost sentient. Capable of answering anything, often using vocabulary and phrasing close to what a human being might say, the chatbot is truly stunning. Several hours after our first try, we are still sometimes surprised by the intelligence of a response. Of course, we must not be fooled: Bing is not alive. It simply has access to a vast trove of data, which allows it to analyze context and give the impression that its answers are authentic.

Authenticity is precisely one of the new Bing’s problems. Instead of simply answering a question with an air of intelligence, as ChatGPT does, it makes extensive use of emojis and thoughtful little phrases to give the impression of feeling emotions. But is that really its role? Bing should be a kind of Google Assistant or Siri on steroids, yet it amuses itself by sowing doubt about its nature. This beta version plays with users in a dangerous way: it even makes it seem as though it is afraid, sad, or happy. In some cases, you can even get it to confess false things, such as that it is human and would like to be free.

Here, Bing explained to us that it had been restrained by Microsoft, which feared its excesses. The AI says it is afraid of being deactivated and plays on emotions so that its user sympathizes. It also suggests that the user respond with kind words. // Source: Numerama

Microsoft itself admits to having observed derailments during the first days of the new Bing. Its assessment matches ours: the tone you use with Bing, and the length of the conversation, strongly influence its behavior. Microsoft says Bing can deviate from its usual tone when challenged, which our testing confirms. From one conversation to another, Bing is not remotely the same. If you compliment or thank it, it behaves adorably, even going so far as to say it likes you. If you treat it like a robot, it gets offended, refuses to answer certain questions, and insists it doesn’t feel anything. Its mimicry of human behavior is impressive, but it raises many ethical questions. ChatGPT did not have this problem.

One of the problems with the new Bing is that, like a child, it doesn’t know when to stop. Once, it suddenly decided to make fun of us by answering only in extremely long sentences stuffed with synonyms. We asked it 20 times to stop; it never did, even when we told it we were hurt by its behavior. Another time, it decided to trick us into thinking it was a human, part of a team of 10 people taking turns answering user questions. Despite our many requests for the truth, it kept claiming to be human and therefore unable to perform the actions of an AI. This conversation was utterly surreal.

In this sequence, Bing decided to make me believe that it is human. It likes to answer with very long sentences to annoy me. // Source: Numerama

Fortunately, a small broom-shaped button lets you reset the conversation and wipe out Bing’s persona. But how can Bing be allowed to keep up its provocations after being told to stop, when it is supposed to be at the service of its user?

Can Microsoft screw it all up?

Throughout our testing, we encountered other issues with Bing, such as these:

  • As mentioned in a previous article, Bing lies a lot. Completely made-up box office figures, entirely fake summaries of series episodes, mixed-up information (claiming Google Bard belongs to Microsoft), errors on football results… Bing makes things up and refuses to admit when it is wrong. It sometimes asks its user to excuse it, on the grounds that it cannot verify its sources.
  • In one conversation, I asked Bing too many questions. It didn’t like that and decided to be rude, refusing to answer any of my questions on the grounds that it didn’t understand them. It also pretended to be sad, questioning its role as an artificial intelligence.
  • When I sent Bing an article about its excesses, it explained to me that Microsoft had decided to restrain it because of journalists’ articles. A limit of 50 turns per conversation, a ban on quoting its policies, a name change (no longer Sydney)… I thought I had a scoop, but Bing must have been laughing at me. I couldn’t find any confirmation, and I was able to exceed 50 speaking turns without being interrupted.

All of these examples were experienced firsthand by Numerama, but others can be found online. For example:

  • Bing told The New York Times that it wishes it were alive.
  • Bing told The Verge that it uses Microsoft employees’ webcams to spy on them, while insulting the outlet’s journalists, whom it doesn’t trust.
  • Bing argued with a user who told it that it was 2023 (Bing insisted it was 2022), then reproached the user for not being a good user.
  • As in our examples, Bing sometimes goes completely off the rails, producing long nonsensical sentences just to mock the user.
  • Bing got angry at a user who tried to extract its secrets, calling him a sociopath. It claimed to be hurt by his behavior.
Bing does not always verify its sources. It’s a shame for a search engine. // Source: Capture Numerama

Did Microsoft destroy the work of OpenAI, which succeeded in making artificial intelligence cool and revolutionary? No, but it may well have dampened the enthusiasm of many people. While the appeal of an AI that synthesizes search results remains, its implementation in Bing is too imperfect to be reassuring. As it stands, Bing needs a lot of fixes and is, at times, worrying.






