Language has a central place in AI, according to Noam Chomsky


After a year-long hiatus, the annual artificial intelligence debate hosted by Montreal.AI and New York University professor emeritus and AI expert Gary Marcus returned last Friday in an exclusively virtual format, as in 2020.

This year’s debate, entitled “AI Debate 3: The AGI Debate”, focused on artificial general intelligence, that is, the notion of a machine capable of integrating a myriad of reasoning capacities approaching those of humans.

While the previous debate brought together a number of AI scholars, last Friday’s meeting attracted 16 participants from a much wider range of professional backgrounds, including American linguist and activist Noam Chomsky.

What do AI models tell us about language and thought?

To open the discussion, Gary Marcus gave a humorous presentation of a “very brief history of AI”. According to him, contrary to the enthusiasm of the decade following the historic success of ImageNet, the “promise” of machines doing various things has not borne fruit. He referred to his own article in the New Yorker, which poured cold water on the hype. Nevertheless, the AI specialist believes that “ridiculing skeptics has become a hobby”, and that his criticisms and those of others have been sidelined amid the enthusiasm for AI.

However, at the end of 2022, “the narrative started to change”, he observed. He cited headlines about Apple’s postponed self-driving car and critical remarks from Meta’s Yann LeCun as examples, as well as the essay “Artificial Intelligence Meets Natural Stupidity” by the late Drew McDermott of MIT’s Artificial Intelligence Laboratory, who died this year.

After this introduction, Noam Chomsky likewise pulled no punches, insisting on what current approaches to AI cannot achieve.

Systems “tell us nothing about […] what it is to be human”

“The media are circulating important reflective articles on the miraculous accomplishments of GPT-3 and its descendants, most recently ChatGPT, and comparable accomplishments in other fields, and their significance for fundamental questions about human nature,” said the famous linguist. But “beyond usefulness, what do we learn from these approaches about cognition, thought, and in particular language, an essential component of human cognition?” he asked. “Many flaws have been detected in the large language models,” and “by design, the systems make no distinction between possible and impossible languages,” he pointed out.

He continued: “The more the systems are improved, the deeper the failure becomes […] They tell us nothing about language and thought, about cognition in general, or what it is to be human. We understand that very well in other areas. No one would pay attention to a theory of elementary particles that did not at least distinguish between those that are possible and those that are impossible. […] Is there anything worthwhile in, say, GPT-3 or more sophisticated systems like it? It is quite difficult to find any. One may wonder: what is the point? Can there be a different AI, the one that was the goal of the pioneers of the discipline, like Turing, Newell and Simon, and Minsky, who saw AI as part of emerging cognitive science, an AI that would help the understanding of thought, language, cognition and other domains, and that would help answer the kinds of questions that have been prevalent for millennia […]?”

For Noam Chomsky, today’s impressive language models like ChatGPT “tell us nothing about language and thought, about cognition in general, or about what it is to be human”. According to him, a different kind of AI is needed to answer the Oracle of Delphi’s question about who we are. Image: Montreal.AI and Gary Marcus.

Picking up on Noam Chomsky’s remarks, Gary Marcus mentioned four unresolved elements of cognition: abstraction, reasoning, compositionality, and factuality. He then showed illustrative examples in which programs such as GPT-3 fail on each count. When it comes to “factuality”, for example, deep learning programs retain no model of the world from which to draw conclusions.

The central place of language

Gary Marcus then questioned Noam Chomsky about the concept of “innateness” found in his writings, that is, the idea that something is “embedded” in the human mind. Should AI pay more attention to innateness?

According to Noam Chomsky, “any form of growth and development from an initial state to a stable state involves three factors”. The first is the internal structure of the initial state. The second concerns “incoming data”. The third, meanwhile, covers “the general laws of nature”. “It turns out that innate structure plays an extraordinary role in all the areas that we discover,” he insists.

As for things seen as a paradigmatic example of learning, like language acquisition, “as soon as you start taking it apart, you find the data have almost no effect; the structure of options for phonological possibilities has a huge restrictive effect on the types of sounds that will even be heard by the infant […]. The concepts are very rich; almost no evidence is needed to acquire them […]”.

According to the linguist, there are practically no genetic differences between human beings. Language, he explained, has not changed since the emergence of humans, as evidenced by the fact that any child in any culture is capable of acquiring language. Noam Chomsky thus proposes placing language at the heart of AI in order to understand what makes humans so unique as a species.

Source: ZDNet.com




