Understand everything about the AI Act, the European text regulating artificial intelligence


MEPs must decide this Thursday, May 11, 2023 on a position on the AI Act, which aims to frame the development of artificial intelligence within the Member States. Although still subject to change, the text aims to reduce the risks associated with AI on several fronts: data protection, transparency, security, and ethics.

"It is important to go quickly. We really need our legislation to adapt." The Vice-President of the European Commission, Margrethe Vestager, said it again on Monday, May 8, 2023: the European Union has "no time to waste" in regulating artificial intelligence. It must be said that the first version of the AI Act was introduced two years ago.

What is the AI Act?

The Artificial Intelligence Act (AI Act), the European Commission’s draft regulation on AI, was first proposed in April 2021. The text aims to provide a uniform legal framework for the use and commercialization of artificial intelligence. To protect users, AI systems are categorized according to the seriousness of the risks they pose, with safeguards adapted to each category.

Where do we stand?

This Thursday, May 11, MEPs must vote in committee to decide on a common position, which will then have to be confirmed in plenary in June. Lengthy negotiations must then begin between the Parliament, the Commission, and the Member States. For her part, Margrethe Vestager estimated on Monday, May 8 that the AI Act should be "adopted by the end of the year".


Which systems would be prohibited?

In the April 2021 draft regulation, certain AI systems would be prohibited altogether. These include:

  • Systems establishing a "social score", which classify people according to their reliability, for example, and can lead to "harmful or unfavorable treatment";
  • Remote, real-time biometric identification systems in publicly accessible spaces used "for law enforcement purposes", including by the authorities;
  • Systems that aim to manipulate through subliminal techniques acting on the unconscious;
  • Systems exploiting vulnerable people such as children or people with disabilities.

What systems would be conditionally permitted?

  • Systems with "high risk", meaning those having "a significant adverse impact on the health, safety and fundamental rights of citizens", such as medical devices, facial recognition systems, or autonomous cars, for example.

These AI systems, classified as high-risk, would be authorized subject to controls carried out by national agencies. These audits would be conducted by independent third parties.

In France, the Olympic Games will also be an opportunity to experiment with algorithmic video surveillance, which relies on artificial intelligence technologies. The test period will run for six months, until March 2025.

  • Systems with "specific manipulation risks", i.e. those that interact with humans, are used to analyze emotions or identify social categories through biometric data, or generate content such as "ultra-realistic" videos.

These systems would have to be accompanied by specific transparency obligations, in this case a warning that their content is "generated by automated means". The Midjourney software has already caused controversy after fake news images it generated were mistaken by some Internet users for real photographs.

"In this new regulation, and we are the first to do so, everything that is generated by artificial intelligence, whether text (everyone now knows ChatGPT) or images, [will carry] an obligation to state [that it] was made by an artificial intelligence," said the European Commissioner for the Internal Market, Thierry Breton, at the beginning of April on Franceinfo. The concrete means of tracing AI productions have not yet been specified, however.

While ChatGPT could fall into several of these categories of AI, the Secretary of State for Digital Affairs, Jean-Noël Barrot, has already said that he does not wish to ban it, as several European countries have attempted. The national ethics committee is due to issue its opinion on the chatbot within a few months.

Which systems would be allowed without reservation?

  • All other types of AI would not require special assessment or measures. This is the case, for example, of connected objects using AI.

    These systems would simply have to respect fundamental rights and European law, in particular the General Data Protection Regulation (GDPR).

What measures to promote innovation?

The text also seeks to stimulate innovation by allowing the creation of "regulatory sandboxes", that is to say, controlled environments used to test new technologies for a limited time.

What specific protections on generative AI?

Among the amendments added to the text to be voted on this Thursday, one provision would require all generative AIs, such as ChatGPT, Midjourney, and DALL-E, to disclose what copyrighted content was used to train their models.

The issue of copyright in training materials is already at the heart of several legal disputes. At the beginning of the year, for instance, we learned that the image bank Getty Images had filed a complaint against the company Stability AI for having used its catalog to develop its AI Stable Diffusion.

Generative AI models would also need to be tested to mitigate foreseeable risks to health, safety, human rights, the environment, democracy, and the rule of law, with the involvement of independent experts, reports Computerworld.com. Risks that cannot be mitigated would have to be described in detailed documentation. One can think, for example, of the use of ChatGPT to write malicious code that could then be used in computer attacks or other illegal acts.

