ChatGPT, or the mirror of worldly knowledge

With everyone having access to ChatGPT, the debate about the dangers of artificial intelligence (AI) has reached the mainstream. In many areas, AI has already surpassed human capabilities. But what is disturbing about ChatGPT is no longer its power of reasoning or calculation; it is its ability to mobilize a vast mass of text to answer, following the common rules of discourse, any question it is asked.

Observers, companies and teachers have been concerned to see the system produce professional summaries or academic assignments perceived as “satisfactory”. But isn’t it precisely this satisfaction that ChatGPT forces us to question?

In its early days, AI aimed to capture technical and specialized knowledge. The expert systems of the 1980s provided medical diagnoses, assisted machine repairers or drove robots. The knowledge they captured was that of reasoning over the facts and rules of a trade. This approach had its most spectacular breakthrough with chess and Go software, which beat the greatest masters.


The second stage of AI took the opposite approach. Instead of starting from the knowledge of an expert, the knowledge is generated by training an algorithm on gigantic databases. Facial recognition is emblematic of this approach. AI can then learn to imitate a literary or musical style and generate complex forms from millions of examples.

A perfect rhetorician

But it did not take long for discerning ChatGPT users to realize that the system reasons poorly and makes calculation errors. It turns out, for example, to be a poor chess player. Yet it can easily expound Einstein’s theory of general relativity, state the rules governing a French real-estate holding company (société civile immobilière) or discuss a moral dilemma in a balanced way.

ChatGPT thus acts as a perfect rhetorician which, without understanding what it is talking about, searches its memory – far larger than a human’s – for the most established phrases, and therefore commonplaces, which it then arranges into a convincing answer.


This fascinating rhetoric confronts managers waiting for reports and teachers grading essays with a disturbing mirror: should they still be satisfied with human syntheses if those syntheses, like ChatGPT, merely repeat what the texts say?

But if they instead solicit original proposals, will they be able to recognize them and respond to them themselves? Recent experimental research shows that for a leader or a teacher to be able to welcome innovative or surprising proposals, they must themselves be able to detect the biases of their own knowledge and the limits of their own creative process (Justine Boudier, “Modeling and Experimenting with a ‘Defixing Leader’ in a Situation of Heterogeneous Fixations”, PhD thesis, PSL, Mines Paris, 2022).

