The ethics committee proposes regulating services like ChatGPT more strictly than "open source" models

How can large artificial intelligence (AI) models be regulated without stifling innovation? This sensitive question is at the heart of the tensions surrounding the AI Act, the draft European regulation. Some, including the European Parliament, want to impose strong obligations on the language models behind services like ChatGPT. Others, such as French President Emmanuel Macron and the French start-ups Mistral AI and LightOn, oppose measures they believe would prevent the emergence of European champions in this field. In an attempt to resolve the conflict, the National Pilot Committee for Digital Ethics (CNPEN) proposes a compromise in an advisory opinion issued on Wednesday, July 5.


"We propose to distinguish between models placed on the market, which would be subject to the same strong obligations as high-risk artificial intelligence applications, and models published in open access, which would only be required to be transparent and to publish evaluations," explains Raja Chatila, professor emeritus at the Sorbonne and co-rapporteur of the opinion, which was commissioned by the Minister Delegate for Digital Affairs, Jean-Noël Barrot.

The category of models "placed on the market" would include interfaces offered to the general public, such as ChatGPT, the now famous conversational robot launched in December 2022 by the American company OpenAI, as well as its competitors, like Google's Bard. It would also cover the large language models that companies can query remotely through an interface called an API, for a fee of a few cents per request: among them, OpenAI's GPT-4 and equivalents launched by Google, for example. Image generation models such as DALL-E 2 or Midjourney would be treated in the same way.


No interface for the general public

By contrast, similar models published in open source, without an interface for the general public, would be considered "open access." This is the distribution method that the start-up Mistral AI, in particular, has said it intends to adopt for its future software. These models would only be required to be transparent, for example about the data used for their training, and to publish benchmark tests, for example on known biases such as sexism. However, if a company were to use such a model for a service deemed high risk under the AI Act (for example, a health diagnosis, access to a public service, or employment), it would be subject to more stringent obligations.

