Will the success of ChatGPT push AI into a world of secrets?


The largest artificial intelligence program in the world today comes from a company that, unlike many of its peers, does not publish its source code.

ChatGPT, created by OpenAI, is not available on GitHub, and the source code for ChatGPT and the GPT models is not otherwise accessible. On Tuesday this week, the company went a step further, declining to disclose even the technical details of the latest version of its engine, GPT-4.

ChatGPT and GPT-4’s lack of transparency is a departure from common practice in the field of deep learning: AI researchers, in universities and companies alike, have until now tended to publish their code, following the tradition of free software.

The closed nature of ChatGPT could become the standard in AI

The closed nature of ChatGPT could become the norm in AI and have ethical implications, warns AI pioneer Yoshua Bengio, scientific director of the Canadian AI institute MILA. “Researchers who came from academia and moved into industry changed the culture, bringing more of an open-source spirit of sharing and collaboration,” he says.

“But market pressures are likely to push in a different direction, towards secrecy,” he predicts. “That is bad for ethical reasons, but also for technological progress.”

Yoshua Bengio was invited to speak at a conference hosted by Collective[i] Forecast, which presents itself as “an artificial intelligence platform designed to optimize B2B sales”. Asked whether it was still possible for researchers to maintain an ethical framework in AI research, given the enormous commercial potential of ChatGPT, he replied that academics “will continue to do open science and to share their work, because it is part of their model”.

The publication of research articles is essential

It is less obvious, according to him, that industrial research will stick to this. Corporate research was “much more secret” before AI came into its own, he recalls. “And looking at the gold rush that’s likely to happen in AI following the success of ChatGPT, I wonder: are we going to keep that open culture in the industry?”

The publication of research papers is paramount, stresses Yoshua Bengio, because AI advances through a collective effort “of cross-pollination between labs.”

“These are complicated systems,” he says of large language models like GPT-4. “We build our code from the code of others, and we also build from the ideas that are written and evaluated in scientific papers all over the world – we build on each other’s progress. There are patents, but really what matters is what is in those articles.”

Make the world aware of the promises and risks of AI

Small companies, the scientific director observes, are generally more willing to take risks with untested software, because “it’s the game of business”. He alludes to programs such as ChatGPT, which in some cases produced results that users found “troubling”, suggesting the technology wasn’t quite ready.

“But today, companies like Google, Microsoft and others feel compelled to join the race,” he argues. “So one of the concerns is whether they’re going to be as careful about what they put out.”

Yann LeCun, who along with Yoshua Bengio received the Turing Award in 2019 for his work on AI, expresses similar concerns. In a February 17 tweet, Meta’s chief AI scientist wrote that Facebook AI Research played a key role in opening up AI research and development. “Others followed. At least for a while. Today, OpenAI, DeepMind, and perhaps even Google publish and open-source considerably less. What will be the consequences for the progress of AI science and technology?” he asked.

According to Yoshua Bengio, the launch of ChatGPT can have a positive effect by making the world aware of the promises and risks of AI. “What I like about the media circus around ChatGPT is that it’s a wake-up call,” he says. “I think people have seen the advances in AI over the years, and a lot of companies and a lot of governments thought that something was going on and the technicians were doing their job, without realizing that very powerful systems were on the way.”

Source: ZDNet.com




