AI Act: France abandons its guerrilla campaign and ratifies


After a provisional agreement was reached on the AI Act, the European regulation on artificial intelligence, the French government expressed a certain dissatisfaction. Statements from President Macron and members of the government suggested that amendments were still possible.

France, allied with its German and Italian neighbors in a declared war on bureaucracy, ultimately failed, or partly gave up trying, to reshape the text in favor of so-called foundation AI models. On February 2, the 27 EU member states ratified the AI Act.

“Perfect” balance between innovation and security

“The AI Act has unleashed passions… and rightly so!” European Commissioner Thierry Breton acknowledged on X. Passions had clearly calmed among the leaders of the 27 by the time of the vote, and French reservations were (almost) forgotten.

“Today, all 27 Member States endorsed the political agreement reached in December – recognizing the perfect balance found by negotiators between innovation and security,” the Commissioner wrote, congratulating himself. That balance was, however, contested.

“We are the first place in the world where, on so-called foundation AI models, we will regulate much more than the others. I think it’s not a good idea and I say it in all honesty,” the French president reacted in December.

The ratification of February 2 does not, however, mean total failure: France did obtain some concessions. The original agreement required transparency about the training data of generative AI models, data that is key to the models' performance.

Business secrecy as an exception to transparency

For rights holders, this transparency also meant an obligation to disclose information, and therefore an opening toward potential compensation. The new version of the AI Act carves out an exception in the name of business secrecy, a change that European AI startups such as Mistral AI had hoped for.

In December, Emmanuel Macron also called for a regular review of the AI Act's provisions. “If we lose leaders or pioneers because of this [Editor's note: the regulation of foundation models], we will have to come back to it. It’s key,” he argued.

The other EU member states heard this request: the February 2024 agreement provides for regular reviews of the text's various obligations. It is a half-victory for the camp presented as pro-innovation, which notably includes France and Germany.

The broad outlines of the AI Act remain unchanged. The regulation classifies artificial intelligence systems into three risk levels, each with its own obligations. Systems judged to present low risk are exempt.

Regular review of obligations

When the risk is high, by contrast, designers will be required to put in place a risk mitigation system and human oversight. They will also have to provide documentation and meet requirements on cybersecurity and user information.

The third category covers AI systems presenting unacceptable risk. Their creators will have six months after the AI Act enters into force to shut these systems down. The ban notably covers facial recognition in public spaces and social scoring systems.

Note that specific obligations apply to LLMs, the large language models at the heart of generative AI solutions. These technologies are classified into two families. Creators of general-purpose AI must, for example, provide technical documentation, ensure compliance with copyright law, and publish a summary of the training content.

AI models presenting systemic risk, meanwhile, are subject to reinforced obligations. Exactly which models will be affected remains to be defined, but large generative AI models like those from OpenAI should fall into this category.

For these vendors, this means assessing systemic risks (and mitigating them) and conducting adversarial tests. They will also have to take security measures to protect the models and measure their energy efficiency.




