AI Act: the three levels of risk established by the European regulation


It was a painful birth. After 37 hours of negotiations spread over three days, the agreement was finalized on December 8, shortly before midnight. The European Commission, the European Parliament and the EU member states reached an agreement to give the Old Continent the world's first regulatory framework for artificial intelligence. Called the AI Act, it could become a reference for other countries, just as the GDPR has inspired regulations on personal data protection beyond Europe.

Led by the Spanish Presidency of the European Union, the discussions were lively, with two opposing visions. On one side, members of the European Parliament wanted to toughen the text to better protect citizens' fundamental rights against the threats posed by generative AI models such as ChatGPT. On the other, a trio of member states (France, Germany and Italy) was keen not to restrict innovation and to preserve the interests of European startups such as Mistral AI in France and Aleph Alpha in Germany.

Three levels of risk

The final text maintains the original risk-based approach and establishes three levels of risk.

  • 1. Minimal risk exempts AI systems from any obligation. The vast majority of AI systems, such as recommendation engines or spam filters, in fact present minimal or no risk to the rights or safety of citizens. Companies can nevertheless commit, on a voluntary basis, to a code of good conduct governing these systems.
  • 2. AI models identified as high risk for health, safety or fundamental rights will have to comply with strict requirements, putting in place a risk mitigation system and human oversight. They will have to provide guarantees in terms of documentation, cybersecurity and information to users.

Potentially discriminatory systems, such as candidate-matching tools in recruitment or credit scoring in banking, will fall into this category. A “regulatory sandbox” is planned to allow startups to develop and test innovative AI before it is placed on the market.

AI systems that interact with humans, such as chatbots, will have to explicitly warn users that they are dealing with a machine. Likewise, users must be informed when biometric categorization or emotion recognition systems are used. Artificially generated content, whether text, audio, images (deepfakes) or video, will have to be labeled as such, in a readable and detectable format.

Facial recognition: many exceptions

  • 3. AI systems presenting an unacceptable risk to the fundamental rights of people will simply be banned. The European Commission cites “AI systems or applications that manipulate human behavior to circumvent free will”, such as mass social surveillance as practiced in China, or voice-assisted toys that would encourage dangerous behavior in children.

The text also intends to ban certain predictive policing applications as well as facial recognition systems used for law enforcement in public spaces. Under pressure from France in particular, the text nevertheless introduces numerous exceptions on this last point. Real-time biometric identification will be authorized for the “targeted” search of a person convicted or suspected of a serious crime, for the prevention of a terrorist threat, or for the search for victims of trafficking or sexual exploitation.

Specific rules for generative AI

On the sensitive issue of general-purpose AI models, such as the one behind ChatGPT, the AI Act introduces specific rules. “For very powerful models that could present systemic risks”, the future regulation provides for “additional binding obligations related to risk management, serious incident monitoring, model evaluation and adversarial testing”.

To ensure “transparency throughout the value chain”, these obligations will be “implemented through codes of good practice developed by industry, the scientific community, civil society and other stakeholders, in collaboration with the Commission”.

National competent authorities will oversee the implementation of the new rules at national level, while the European Commission plans to create a European AI Office to ensure coordination at Union level. For general-purpose AI models, a scientific panel of independent experts will help classify the models presenting such systemic risks.

As in any compromise, each side therefore made concessions. As the Context site notes, member states mainly gave ground on foundation models (that is, large AI models), while MEPs conceded on national security.

The CCIA (Computer & Communications Industry Association), an American tech lobby group, speaks of a “missed opportunity for Europe”. By imposing strict obligations on developers, the future law is “likely to slow down innovation in Europe”.

Sanctions of up to 7% of turnover

Note that this regulation, if adopted, will not come into force until the beginning of 2026. The political agreement must still go through technical meetings before being submitted for formal approval by the European Parliament and the Council. During the transitional period, an “AI Pact” is proposed to encourage AI developers to commit, on a voluntary basis, to implementing the main measures of the AI Act ahead of the legal deadlines.

As with the GDPR, compliance will be a particularly sensitive issue, and companies must anticipate the impacts of its implementation very early on. Those who do not comply with the rules will be liable to fines of up to 35 million euros or 7% of annual global turnover, whichever is higher. A full scale of sanctions is envisaged, ranging from the provision of incorrect information to the deployment of prohibited AI. Reduced amounts are provided for SMEs and startups.


