Regulation of AI: Europe at a time of choice


Scheduled to come into force in mid-2025, the AI Act is meant to regulate the use of artificial intelligence in the European Union. When the project was first presented in April 2021, the founding principle of the future regulation had the advantage of simplicity: it was based on a classification of AI by risk level in critical areas such as health, security and justice.

AI models creating an “unacceptable” risk, such as population-surveillance systems based on facial recognition, are outright prohibited. Systems likely to generate discrimination, such as candidate-matching tools in recruitment, bank credit scoring or predictive justice, present a high risk and must be regulated.

Then ChatGPT came along…

The arrival over the last year of ChatGPT, and more generally of generative AI, has changed the situation. Beyond the risk-based approach, the AI Act now intends to regulate these “foundation models”: as Wikipedia explains, large models trained on very high volumes of unlabeled data, usually through self-supervised learning.

As it did with the Digital Markets Act (DMA), the European Union plans to regulate not only uses but also the players behind large language models (LLMs) such as OpenAI/Microsoft’s ChatGPT, Google’s Bard or Meta’s LLaMA. Because of their power, these tools, which can carry out a large number of tasks, are considered high-impact and potentially systemic risks.

Described as “high-performance foundation models”, they could be subject to additional obligations and regular audits. Other categories of AI are defined according to criteria such as the number of users, notably general-purpose AI (GPAI).

France, Germany and Italy in ambush

As the text enters its final legislative phase, the so-called trilogue, in which the Council of the EU, the European Commission and the European Parliament negotiate a consensual outcome, this multi-level approach is hotly contested. Three founding countries, France, Germany and Italy, are asking Spain, which currently holds the presidency of the Council of the EU, to abandon it.

According to the site EURACTIV, the trio considers that the rules applying to foundation models “would go against the technological neutrality and risk-based approach of artificial intelligence regulation, which is supposed to preserve both innovation and security.”

The argument that innovation would be restricted is taken up by tech players. Digital Europe, which brings together around thirty national organizations including France’s Afnum, fears in a press release that the regulation will nip the development of new models in the bud, “many of which were born here in Europe”.

For the lobby group, “the risk-based approach must remain at the heart of the AI Act”. Technologically neutral, the regulatory framework must focus “on truly high-risk use cases” and not classify certain AI models as high-risk by default.

Finally, Digital Europe highlights the cost of compliance for companies in the sector. Bringing a single AI-based product to market would cost more than 300,000 euros for an SME with 50 employees, according to the Commission’s own figures.

In France, another high-profile lobbyist, Cédric O, plays the go-between. The former digital minister sits on the committee of experts tasked with advising the government on its national artificial intelligence strategy, while also advising Mistral AI, the French flagship in the field.

A mistake with serious consequences

Arthur Mensch, CEO of Mistral AI, makes no secret of his opposition to the current version of the AI Act. He considers it far removed from its original spirit, which aimed to make regulation proportionate to the risk level of a use case. Seeking now to regulate foundation models, i.e. “the driving force behind some AI applications”, is, in his view, a mistake.

“We cannot regulate an engine without uses,” he argues. “We do not regulate the C language because it can be used to develop malware. Instead, we prohibit malware and strengthen network systems (we regulate usage).”

For the young startup founder, the regulation as currently proposed favors “incumbent companies that can afford to meet onerous compliance requirements”. New entrants who “don’t have an army of lawyers”, on the other hand, will be penalized.

If it is not amended in the coming weeks, the AI Act could, more broadly, curb innovation and undermine European sovereignty in the field of AI by weakening its players in the face of their American and Chinese competitors.
