For Arthur Mensch (Mistral AI), the AI Act is aiming at the wrong target


A rising star of French AI, widely presented as an alternative to American solutions, Mistral AI is regularly asked about its position on the European regulation currently being drafted.

For Arthur Mensch, however, the startup's stance has given rise to numerous extrapolations. On X, the CEO therefore set out to clarify his opinion of the text. In his view, the current version has strayed far from its original purpose: initially, he explains, the AI Act was about product safety.

A balanced first version of the AI Act

“Product safety laws benefit consumers. Poorly designed use of automated decision-making systems can cause significant harm in many areas,” he writes.

In his reading, the AI Act aimed to make regulation proportionate to the level of risk of each use case: requirements for an entertainment application and for a health tool should therefore differ.

“The first European law on AI found a reasonable balance in this regard,” says the head of Mistral AI, who denies any hostility towards binding regulation: “The many voluntary commitments we see today have little value,” he says.

Safety should nevertheless remain the priority of the AI Act, he adds. But according to Arthur Mensch, the legislator now has another hobby horse and proposes to regulate “foundation models”, i.e. “the driving force behind certain AI applications.”

Regulating models is a mistake

“We cannot regulate an engine without uses. We do not regulate the C language because it can be used to develop malware. On the contrary, we prohibit malware and strengthen network systems (we regulate usage),” he argues.

Arthur Mensch believes that if regulators wish to influence the development of foundation models, the product-safety approach remains relevant, because rules in this area “will naturally impact how we develop models.”

The problem, laments Mistral AI's CEO, is that recent versions of the AI Act have started to tackle poorly defined “systemic risks”. He describes the taxonomy proposed by the text as “the worst possible”.

AI giants favored by the law

Arthur Mensch believes that current regulations favor “incumbent companies that can afford to face onerous compliance requirements.” Those who “do not have an army of lawyers”, by contrast, would be penalized.

“Mechanically, this goes against the development of the European AI ecosystem,” he adds. To make his voice heard, Arthur Mensch is not relying on X alone: in September, he joined the IA Gen committee responsible for advising the French government on its AI strategy.

France has since joined forces with Germany and Italy to fight unnecessary bureaucracy. The trio is urging Europe to place AI at the heart of its industrial policy and to adopt a supply-side support strategy for an ecosystem it sees as threatened by the AI Act.
