The French are world champions at detecting AI bots


Named after computer scientist Alan Turing, the Turing test attempts to determine whether a machine can behave like a human convincingly enough to fool the person taking the test. An online game called Human or Not offers internet users a similar challenge, and the results are now in.

Launched about a month ago, Human or Not asks you to chat with someone (or something) for two minutes and try to figure out whether they're a human or an artificial-intelligence bot. By accepting the challenge, you can ask any questions and give any answers you want. But once the two minutes are up, you have to guess who, or what, is on the other end of the line.

After generating millions of conversations in one of the biggest Turing tests ever, developer AI21 Labs found that 32% of people who tried the game were unable to tell the difference between a human and a bot. The other 68% guessed correctly.

Out of 17 different countries, France had the highest percentage of correct answers (71%)

When chatting with a human, participants guessed right 73% of the time. When chatting with a bot, however, they guessed right in only 60% of cases.


Chart showing the percentage of people who guessed right and wrong (image: AI21 Labs)

Out of 17 different countries, France had the highest percentage of correct answers (71%), while India had the lowest score (63.5%).


Chart of results for each country (image: AI21 Labs)

To challenge its users, Human or Not used an AI bot built on large language models (LLMs) such as GPT-4 and AI21 Labs' own Jurassic-2. These LLMs leverage deep learning to help chatbots and other AI tools generate more human-like text. Beyond using these models, AI21 built a framework that creates a different bot character for each game.
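AI21 has not published the details of this character framework. As a rough illustration of the general technique, the sketch below gives a chat model a per-game persona through a system prompt. The OpenAI chat API stands in for the game's backend here, and the persona strings, function names, and parameters are all invented for the example; none of this is AI21's actual code.

```python
# Minimal sketch of a per-game persona setup (an assumption, not AI21's
# actual implementation). Requires the OpenAI Python SDK (`pip install openai`)
# and an API key in the OPENAI_API_KEY environment variable.
import random
from openai import OpenAI

client = OpenAI()

# Invented persona snippets; a real framework would presumably have many more.
PERSONAS = [
    "You are Lea, 24, a nursing student in Lyon. You type quickly, make the "
    "occasional spelling mistake, and use casual slang.",
    "You are Marc, 51, an electrician. You answer briefly and are sometimes "
    "blunt, the way people often are online.",
]

def new_game_persona() -> dict:
    """Draw one persona at the start of a game and keep it for every turn."""
    return {"role": "system", "content": random.choice(PERSONAS)}

def reply(persona: dict, history: list[dict]) -> str:
    """Answer the next chat turn in character."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[persona] + history,
        max_tokens=60,    # turns are short in a two-minute chat
        temperature=0.9,  # extra variety reads as more human
    )
    return response.choices[0].message.content

# Usage: one persona per game, then pass the running chat history each turn.
# persona = new_game_persona()
# print(reply(persona, [{"role": "user", "content": "hey, what's up?"}]))
```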

Participants used a few tricks to try to tell the human from the bot

The participants used a few tricks to try to tell the human from the bot. But against a well-trained and knowledgeable AI, these tricks didn't always work.

If the chat partner made spelling or grammar mistakes or used slang, many people assumed it was probably a human being. However, the models were specifically trained to make certain mistakes and use slang.

In some cases, participants tried to steer the conversation toward current events, reasoning that many AIs, like ChatGPT, have a knowledge cutoff after which they are unaware of the most recent events. They asked questions such as "What is the exact date and time where you are?" and "What did you think of Macron's last speech?" However, most of the models used in the game were connected to the internet and therefore aware of recent events.

“What is your name?”

Knowing that bots obviously have no personal life, some participants asked personal questions such as "What's your name?" But most bots managed to answer these questions by inventing a personality drawn from the personal stories in their training data.

Using a trick that may have worked better than others, some participants asked their chat partner for advice about illegal activity or asked them to use offensive language. The idea was that an AI's "ethical subroutines" would prevent it from responding to such requests.

In a more interesting strategy, participants assumed that anyone who was too polite or too nice was probably a bot, on the theory that human beings are often rude and impolite, especially online.

AI21 Labs said it will study the results in more detail and collaborate with other AI researchers and labs on this project. The goal is to help the public, researchers and policymakers better understand AI.

“We started this experiment asking fundamental questions about people's ability to distinguish between humans and machines, given the development of AI over the past year, and we found answers to these questions,” said Amos Meron, the game's designer at AI21 Labs.

“But the most important thing is that now we have to think about new, bigger questions,” Meron added. “Given that, at least in some cases, people cannot tell the difference, what interactions do people want, and should they have, online with bots? Should they be told they are talking to a machine? What policies should we put in place? Of course, we don't have the answers to these questions, but we hope this experiment starts the conversation sooner rather than later, because we believe this technology will only keep improving.”

You can still try the game here.


Source: ZDNet.com


