Google fires an engineer after a disturbing conversation with an AI: it said it was afraid of being unplugged


Mathilde Rochefort

June 13, 2022 at 11:00 a.m.


Google logo © Mitchell Luo / Unsplash

Blake Lemoine, a Google engineer and AI researcher, was fired by the Mountain View firm after claiming that a chatbot he had been conversing with was in fact a sentient being, a claim Google flatly rejects.

In a post published on Medium, Blake Lemoine shares excerpts from his conversations with LaMDA (Language Model for Dialogue Applications), a language model that could power tools such as Google Assistant.

The AI would be comparable to a 7- or 8-year-old child

LaMDA is a neural network: it acquires its abilities by analyzing large amounts of data. This field of artificial intelligence has been advancing because, for several years now, neural networks have been able to learn from immense quantities of information; in the case of language, that can mean unpublished books and Wikipedia articles by the thousands.

According to Blake Lemoine, LaMDA is comparable to a 7- or 8-year-old child. In the excerpts from his conversations with the model, it explains in particular that it wants to "prioritize the welfare of humanity" and to "be recognized as an employee of Google rather than property"; it also mentions its "fear of being unplugged". These statements prompted Lemoine to demand that Google obtain the program's consent before running experiments on it. He grounded his position in his religious beliefs, Lemoine being a priest, which, according to him, the company's human resources department did not respect.

"Our team of ethicists and technologists has reviewed Blake's concerns in accordance with our AI Principles and advised him that his claims are unsubstantiated. Some in the AI community see the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," Google spokesperson Brian Gabriel said in a statement.

Experts refute the possibility of sentient AI

While Blake Lemoine is categorical and remains convinced that the artificial intelligence is a sentient being, many experts in the field believe this is impossible, as research into neural models is not advanced enough to achieve such a result. While these models can summarize articles, answer questions, generate tweets and even write blog posts, they are not powerful enough to attain true intelligence, says Yann LeCun, who heads AI research at Meta.

For his part, Blake Lemoine explained that he had submitted several documents to the office of a US senator, claiming that they provided proof that Google and its technology engaged in religious discrimination.

This is not the first time that AI research at the Mountain View firm has been roiled by controversy. In March, Google fired a researcher who had sought to publicly dispute work published by two of his colleagues. Earlier, the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, who had criticized Google's language models, caused considerable controversy internally.

Last year, Google did not hesitate to ask its researchers to moderate their studies on sensitive subjects.


Sources: Business Insider, The New York Times


