“The draft European regulation on artificial intelligence encourages business ethics”

Tribune. On April 21, 2021, the European Commission (EC) published a draft regulation establishing harmonized rules on artificial intelligence (AI). This project inaugurates a new form of regulation combining law, standards, ethics and compliance. This innovation, which may take several years to complete at the usual pace of procedure, nevertheless opens the way to a new manner of thinking about and practicing European law, for which lawyers must prepare now.

For the Commission, artificial intelligence is a rapidly evolving family of technologies that requires new forms of control, including room for continuous experimentation. This control should make it possible to prevent AI from infringing the fundamental rights of the European Union (EU), while encouraging responsible innovation. The main challenge of this new regulation is to define rules governing behavior and AI products that have not yet been conceived, which breaks with the age-old logic of legislating on the “known”.


To this end, the Commission is proposing a new legal order comprising, on the one hand, the EU Charter of Fundamental Rights and, on the other, specific regulations such as the one on AI, the whole being intended to prevent possible violations of certain of these rights (the right to human dignity, respect for private life and the protection of personal data, non-discrimination, and equality between women and men). This prevention system provided for by the European Commission is based on monitoring companies’ implementation of compliance systems according to identified “AI risk” levels (unacceptable, high, medium, low). The AI regulation is therefore a compliance regulation.

Preventive “good behavior”

The Commission also encourages companies to anticipate these risks in the design and operation of their AI products, by internally defining preventive “good behavior” through codes of conduct. In this sense, the draft AI regulation is a regulation that encourages business ethics. It also provides for certification of companies’ AI compliance systems by “assessment bodies”, with a “CE” marking. To obtain this certification, companies will have to set up a “quality management” system of the kind found in the standards of the International Organization for Standardization (ISO), which issues the technical standards imposed on companies. The AI regulation is therefore a standards regulation.
