“In the face of technologies like ChatGPT, one can be both alarmist and skeptical”

When a technology like ChatGPT emerges, critics have two options. They can be alarmist, pointing out the potential dangers of such chatbots or of any other digital novelty. Or, on the contrary, they can be skeptical, emphasizing the limits of these tools, their lack of efficiency or relevance in answering users' requests. At first glance, the two attitudes seem hardly compatible: why worry about something that won't work? Yet one can be both alarmist and skeptical.

The impressive success, in the media and among Internet users, of OpenAI's software capable of producing texts that imitate human prose invites us to highlight certain flaws. Since Microsoft integrated an OpenAI assistant close to ChatGPT into its Bing search engine and its Edge browser, users have started noticing inaccuracies in its answers.


According to one of them, the chatbot reportedly insisted that the current year was 2022, to justify the supposed absence of theater showtimes for the film Avatar 2, then called its interlocutor a “bad user”. According to another, the assistant claimed that Croatia had left the European Union in 2022. A columnist for The Washington Post, trying to trick the software by asking it when Tom Hanks had revealed Watergate (the actor plays a role in a film about that American scandal), was surprised to see it cite the “many” conspiracy theories that would support this thesis!

The buzz will die down

OpenAI has long acknowledged that errors occur and warns against relying solely on ChatGPT for “important tasks”. But these errors signal that it may not be so easy for Microsoft to incorporate OpenAI's tools in a useful and reliable way into its Office suite (Word, PowerPoint) and its Outlook and Teams messaging services.


Pushing it further, one could even predict that the renewed buzz around ChatGPT and artificial intelligence will die down. The field has already gone through bouts of hype, for example around 2016, when the AlphaGo software beat the world Go champion and Elon Musk worried that artificial intelligence (AI), which he judged “more dangerous than nuclear bombs”, might one day eradicate humanity. In the aftermath, some, including computer science professor and ethicist Jean-Gabriel Ganascia, denounced the “myth” of superhuman artificial intelligence and pointed out that AI software is not endowed with reason. The Watson computer that IBM launched in healthcare has shown its limits. Specialists in the sector like to say that since the 1960s, AI has thus known many springs followed by winters, linked to disappointment in the face of exaggerated expectations.
