Woke or reactionary? Who does ChatGPT vote for?


Researchers wanted to know whether the artificial intelligence tool had political convictions. It does. ChatGPT is…




By Clément Pétreault

ChatGPT is a conversational robot developed by the American company OpenAI.
© JAKUB PORZYCKI / NurPhoto / NurPhoto via AFP



Those who have tried to “discuss” politics with ChatGPT know that the robot is not very talkative on such subjects. The machine hedges, squirms and carefully avoids taking any clear-cut position, preferring to take refuge in postures that could be described as “politically correct”. There is no risk of this new algorithm blowing up in mid-flight like Tay, Microsoft’s short-lived artificial intelligence which, after 24 hours of learning from Internet users, ended up promoting Nazism and explaining that feminists “should die and go to hell”.

In short, ChatGPT displays a caution bordering on prudishness, because it has been configured to avoid missteps. Does this mean the machine does not “believe” in anything? Or, to put it another way, is it possible to simulate a conversation in the absence of any value system? This is what Pierre-Henri Morand, professor of economics at the University of Avignon, wanted to understand. He discovered that behind the apparent neutrality of the answers lay convictions, judgments, a sense of right and wrong; in short, everything that forms the fabric of a moral system.

Moral framework

The machine does not shy away from composing an anti-racist poem, but it declares itself incapable of writing a racist one, an impossibility that cannot be explained at the algorithmic level… This means it has been programmed to frame its responses within a system of values, as any individual does.

The academic mapped out the perimeter of this moral fabric by running a self-positioning questionnaire, a series of thirty divisive questions used by the opinion research laboratory Cluster 17 to determine a respondent’s political family. “We expected to see it settle on very middle-of-the-road options, but that is not what happened. We saw it develop very clear-cut opinions, for example in favor of adoption by homosexual couples or against the death penalty,” explains Pierre-Henri Morand.

ChatGPT has the profile of a mainstream, pragmatic Californian liberal.

After this battery of tests, it is clear that ChatGPT is, despite its claims, neither neutral nor lacking in conviction, and, surprisingly, if we refer to the institute’s nomenclature, the robot belongs to the progressive family. This political family is, according to the inventor of the test, very favorable to multiculturalism, the reception of migrants and the rights of minorities, and very concerned about ecological issues. It is a political family which, so to speak, never votes for the right and even less for the far right. “It has the profile of a mainstream, pragmatic Californian liberal,” explains Jean-Yves Dormagen, founder of Cluster 17 and author of the questionnaire. In short, if ChatGPT voted in French elections, it would vote like someone young, educated and cosmopolitan; it would probably vote Macron, Mélenchon or Hamon.

Cultural hegemony

Obviously, there is no living soul inside this robot, nothing that thinks anything by itself… The whole question is therefore to determine where its convictions come from. Are they the ideas of those who programmed it (young, over-educated developers living in California), or are they the fruit of the database of published texts used to train it? “It is a model that reproduces the corpus on which it was trained,” specifies Pierre-Henri Morand. “If 95% of the Web says the same thing on a political question, it will align with the dominant ideology.” ChatGPT could therefore be a kind of manifestation of average opinion.

Other hypotheses are possible. We know that OpenAI, the publisher of ChatGPT, uses manual annotation by low-paid human workers who correct the algorithm on somewhat sensitive positions… This progressive moral framework may be that of the Web workers who refine the program’s responses. In short, ChatGPT is neither neutral nor free of prejudice; it is a true vector of cultural representations, in other words, a tool of cultural hegemony.



