Ex-Google executive who “fears the excesses of AI” accused of misplaced opportunism


Did Geoffrey Hinton express his regrets a little too quickly? Several former Google employees accuse the researcher of opportunism: Hinton allegedly ignored criticism of AI that was raised internally.

The truth is sometimes many-sided. On May 1, Geoffrey Hinton, one of the pioneers of AI and a specialist in neural networks, said in an interview with the New York Times that he had left Google in order to speak more freely about the dangers of artificial intelligence. He added that he regretted part of his work and consoled himself with "the usual excuse: if I hadn't done it, someone else would have done it."

Geoffrey Hinton went on to explain that he feared abuses involving false information. Worse still, the researcher feared a disaster scenario worthy of a science-fiction film, in which machines surpass human intelligence (the arrival of an AGI), with serious repercussions for society. A trope that has been widely criticized since.

A career with blinkers

According to former colleagues at Google, Geoffrey Hinton only became aware of the ethical problems of AI very recently. In 2020, when Timnit Gebru was fired by Google after publishing a scientific study on the biases of artificial intelligence systems, Hinton reportedly remained silent.

Timnit Gebru, fired by Google in 2020. // Source: Kimberly White

The study in question pointed to the environmental costs, the financial costs, and the "stereotyping, denigration, rise in extremist ideology and wrongful arrests" that can be induced by large language models (LLMs).

A late awakening criticized on Twitter by Margaret Mitchell, former co-lead of Google's AI ethics team, who was also fired shortly after the affair: "This would have been the time for Dr. Hinton to denormalize the firing of Timnit Gebru (not to mention those that followed more recently). He did not do it."

She added: "This is how systemic discrimination works. People in positions of power normalize it. They practice discrimination, they watch their peers practice it, they say nothing and carry on."

Making the current harms of AI invisible

Worse still, interviewed on CNN on May 5, Geoffrey Hinton continued to downplay the problems documented by Timnit Gebru. In his view, her concerns are not as existentially serious as his own scenario of machines dominating humans.

Minorities are sometimes made invisible by LLMs. // Source: Gerd Altmann / Pixabay

"It is amazing that anyone can say that the harms [of AI] happening now, felt most acutely by historically marginalized people, such as Black people, women, people with disabilities, and precarious workers, are non-existential," said Meredith Whittaker, president of the Signal Foundation and AI researcher, quoted by Fast Company.

A former Googler herself, Meredith Whittaker was pushed out of Google in 2019, reportedly after campaigning against a contract to supply the US military with AI technology for drones. She criticizes Hinton for continuing to discredit criticism other than his own.

Misplaced opportunism or erasure of his colleagues' work? More than ever, Geoffrey Hinton's recent remarks need to be put into perspective.

