Yann LeCun (Meta): “AI has caused a renaissance of R&D in tech”


For Yann LeCun, chief AI scientist at Meta, deep learning, one of the techniques of artificial intelligence, has led to a kind of renaissance in the field of tech R&D.

“The type of techniques we worked on had a much greater, much broader commercial impact” than in previous periods of artificial intelligence (AI) development, Yann LeCun said during a small meeting this month.

“And the result is that a lot of research funding has been attracted and a renewal of industrial research has taken place.”

Why do tech giants do fundamental R&D?

Just 20 years ago, recalled the scientist, Microsoft Research was the only industrial entity that “had a certain stature in information technology”. But then the 2010s saw “Google Research arrive, and FAIR [Facebook AI Research], which I created, and a few other labs that basically revived the idea that industry could do basic research”.

This resurgence of corporate R&D is happening, believes Yann LeCun, “because the prospect of what can happen in the future, and what happens in the present, thanks to these technologies, is great”.

According to the scientist, the value of applied AI leads to a two-track system. On one track, corporate R&D maintains long-term, “moonshot”-type projects. The other channels research toward more practical product applications.

“It makes perfect sense for a company like Meta to simultaneously have a large research lab that has ambitious long-term goals, like creating virtual assistants with human-level intelligence, because that is what we ultimately want. But at the same time, the technology that has been developed is already useful.”

The example of automatic emergency braking and autonomous cars

“For example, content moderation and hate-speech detection in multiple languages have been completely revolutionized in the last two or three years by large Transformers pre-trained in a self-supervised way,” says Yann LeCun, referring to Google’s Transformer natural-language-processing architecture, introduced in 2017, which has become the basis for many programs, including OpenAI’s ChatGPT.

This approach “has made enormous progress, incredible progress,” he says, progress that is due “to the latest research in AI”.

The scientist was invited to participate in a one-and-a-half-hour conference organized by the Collective[i] Forecast, an interactive online discussion series hosted by Collective[i], which presents itself as “an AI platform designed to optimize B2B sales”.

He said he was “optimistic” about the ability of applied AI to be used for the good of society. Even when AI fails to achieve certain goals, it produces effects that can be beneficial, he argues.

As an example, he cites autonomous vehicle systems which, if not truly autonomous, have had the effect of providing road safety devices that have saved lives. “Every car released in Europe must now be equipped with an automatic emergency braking system (AEB).” And for him, the use of automatic emergency braking is comparable to “systems that allow the car to drive itself on the highway”. The braking mechanism reduces collisions by 40%. “So despite what you might hear about a Tesla crashing into a truck or whatever, these systems are saving lives. To the point that they have become mandatory.”

The Big Business of Using AI in Science

“What I find quite promising is the use of AI in science and medicine” to improve lives, assures Meta’s chief AI scientist. “A large number of experimental systems are improving the reliability of diagnosis from MRIs and X-rays for a number of diseases,” he says. “This is going to have a huge impact on health.”

These advances, while positive, are small, he adds, compared to “the big deal” – namely “how AI will be used for science”.

“We have systems that can fold proteins. We now have systems that can engineer proteins to bind to a particular site. Which means we can design drugs completely differently than we have in the past.”

AI to the rescue of battery development?

AI also has “enormous potential for progress in the field of materials science”, believes the scientist. “And we are going to need it, because we have to solve the problems related to climate change. In particular, we need to be able to have high-capacity batteries that don’t cost a fortune, and that don’t require you to use exotic materials found in only one place.”

On this subject, Yann LeCun cited the Open Catalyst Project, launched by his colleagues at FAIR in collaboration with Carnegie Mellon University, which applies AI to the development of “new catalysts for use in the storage of renewable energy in order to contribute to the fight against climate change”.

“The idea is to cover a small desert with photovoltaic panels and store the energy produced by these panels, for example in the form of hydrogen or methane,” he explains. Current approaches to producing and storing hydrogen or methane are “either scalable or efficient, but not both,” he said. “We might be able to discover a new catalyst using AI that would make this process more efficient or scalable by not requiring an exotic new material. It may not work, but it’s worth a try.”

Despite these many promising commercial applications, the scientist suggests that narrow industrial uses fall short of AI’s larger goal: the quest for animal- or human-level intelligence.

The limits of AI scaling

The enormous research advances that underpin today’s applications have been made possible, in the era of deep learning, by an unprecedented availability of data and computing power, the scientist recalls; earlier periods of fundamental scientific progress were not nearly so abundant or so rich.

“What caused the more recent wave was first a few conceptual advances, but more importantly the amount of data available and the amount of computation that made scaling these systems possible.”

Large language models (LLMs) like GPT-3, the model on which ChatGPT is based, are proof that scaling AI, i.e. adding more layers and adjustable parameters, directly improves the performance of these programs. “They turn out to work really well when you scale them up,” he says of GPT-3 and its ilk.
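To make “scaling” concrete: a common back-of-the-envelope estimate (an assumption for illustration, not a figure from the article) puts a standard Transformer’s weight count at roughly 12 × layers × hidden-size². Plugging in GPT-3’s published shape (96 layers, hidden size 12,288) lands near its widely reported 175 billion parameters:

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a standard Transformer.

    Each layer holds about 4*d_model^2 attention weights and
    8*d_model^2 feed-forward weights, i.e. ~12*d_model^2 per layer.
    Embeddings, biases and layer norms are ignored.
    """
    return 12 * n_layers * d_model * d_model

# GPT-3's published configuration: 96 layers, hidden size 12288.
print(approx_transformer_params(96, 12288))  # ~1.74e11, close to the reported 175B
```

The estimate also shows why scaling gets expensive fast: doubling the depth doubles the parameter count, while doubling the width quadruples it.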

But according to Yann LeCun, the industry risks seeing its returns diminish at some point if it just scales without exploring other avenues: “A lot of companies, like OpenAI in particular, have used this as a mantra. It would suffice to make things bigger, and it would work. But I think we’re hitting those limits right now.”

Despite scaling ever larger models, “we don’t seem to be able to train a complete autonomous driving system by simply training larger neural networks on more data; that doesn’t seem to allow us to get there”.

Impressive as they are, programs such as ChatGPT, which Yann LeCun called “not particularly innovative” and “nothing revolutionary”, do not possess planning capability.

And the limits of reactivity versus planning

“They are completely reactive,” underlines Yann LeCun. “You give them a context of a few thousand words”, i.e. the prompt typed by the human, “and then from that the system just generates the next word in a completely reactive way. There’s no planning or breaking down of a complex task into simpler ones; it’s just reactive.”
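The reactive loop he describes can be sketched with a toy stand-in for an LLM: a lookup table (invented here for illustration) that always emits the single most likely next word given the last one. A real model conditions on the whole context, but the control flow is the same: emit one token, append it, repeat, with no plan or goal.

```python
# Hypothetical bigram table standing in for a language model:
# each word maps to its "most likely" successor.
BIGRAMS = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def generate(prompt: str, n_steps: int) -> str:
    """Purely reactive generation: each step looks only at what has
    already been emitted; there is no planning or task decomposition."""
    words = prompt.split()
    for _ in range(n_steps):
        nxt = BIGRAMS.get(words[-1])
        if nxt is None:  # no known continuation: stop
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", 4))  # -> "the cat sat on the"
```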

He offers the example of Copilot, the OpenAI-powered program that Microsoft has integrated into its GitHub code-hosting platform. “There is a very strong limitation to these systems,” he explains. “They are mostly used as a predictive keyboard on steroids.”

“You start writing your program, you write a description of what it should do in the comments, and you have tools based on large language models that will complete the program,” he adds.
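In practice, that comment-driven workflow looks like the sketch below; the completion shown is a hypothetical example of what such a tool might emit, not actual Copilot output, and it illustrates why the result still needs review:

```python
# The developer writes only the descriptive comment; a Copilot-style
# tool proposes the function body from it. (Hypothetical completion.)

# Return True if `s` reads the same forwards and backwards, ignoring case.
def is_palindrome(s: str) -> bool:
    t = s.lower()
    return t == t[::-1]

print(is_palindrome("Level"))   # -> True
print(is_palindrome("Level!"))  # -> False: punctuation is not ignored,
                                # a subtle gap a reviewer must catch
```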

The great search for productivity gains thanks to AI

Such autocompletion is similar to cruise control in cars: “your hands should stay on the wheel at all times,” because Copilot can generate errors in the code without you realizing it.

“The question is how to move beyond systems that generate code that sometimes works and sometimes doesn’t,” he argues. “And the answer to that question is that none of these systems today are able to plan; they are completely reactive. And that’s not what it takes to generate intelligent behavior.”

On the contrary, to have “intelligent behavior, you need a system capable of anticipating the effect of its own actions”. You also need “some kind of internal model of the world, a mental model of how the world will change as a result of your own actions”.

Last summer, the scientist wrote a thought piece on the need for programs with planning capability, a topic he discussed at length with ZDNET in November.

So far, the resurgence of IT research and development in business has yet to deliver technology’s most valuable outcome, productivity, he believes, but that could happen over the next decade.

Citing the work of researcher Erik Brynjolfsson of Stanford University’s Human-Centered Artificial Intelligence group, he notes that economists view AI as a “general-purpose technology”, meaning something that “will slowly diffuse through all sectors of the economy and industry and fundamentally affect all economic activity” through various effects – the creation of new jobs, the displacement of other jobs, etc. – and will lead to an increase in productivity, because it fosters innovation. In other words, innovation that builds on innovation is the economic equivalent of productivity.

“What Erik, in particular, said is that, at least until very recently, we haven’t seen any increase in productivity due to AI, and historically, he says, it takes about 15 to 20 years to see a measurable effect on productivity from a technological revolution.”

“So according to his prediction, this is probably going to happen within the next 10 years.”

According to him, the resurgence of fundamental business R&D in the field of information technology may have some durability, given its appeal to young researchers.

“We have observed that young talents now aspire to become AI researchers, because it’s cool. Whereas before, the same people would have gone into finance,” emphasizes the scientist. “It’s better for them to go into science, I think.”

Source: ZDNet.com




