Federating, supervising, auditing… the CNIL’s action plan on AI


Sam Altman, CEO of OpenAI, the company behind ChatGPT among other products, testified this week at his first hearing before the United States Congress. The entrepreneur called for the adoption of rules to regulate artificial intelligence.

His call had been anticipated, including in Europe. The recent momentum around generative AI had already prompted authorities to consider how the AI Act should be adapted to these technologies and their uses. In France, the CNIL, which has jurisdiction over AI, has announced its action plan.

From augmented video surveillance to generative AI

The authority, through its innovation laboratory Linc, recently devoted an in-depth dossier to the subject of generative AI. Raising awareness, however, is not the Commission’s only mission.

In 2023, it had already planned to pay close attention to augmented cameras. The CNIL now specifies that it will extend this action by expanding “its work to generative AI, large language models and derived applications (including chatbots).”

In this context, the personal data protection authority will develop its initiatives around four strands. The first consists of understanding how AI systems work and their impact on people.

The Linc dossier is part of this effort to understand these AI systems and the challenges they raise in terms of fairness and transparency of processing, protection of publicly accessible data on the web, and security.

It falls to the CNIL to provide answers, in particular to AI players, many of whom have expressed uncertainty about how the GDPR applies, for example to the training of generative models.

Concrete recommendations for AI players

Questions also arise regarding compliance with the European AI regulation currently under discussion. The answers provided by the authority are intended to “allow and supervise the development of AI that respects personal data”. This is the second strand of the action plan.

On AI, the Commission can already build on an existing body of work, including its position on augmented video surveillance. It is continuing this “doctrinal work” and announces that it will soon publish several documents, among them a guide on the rules applicable to the sharing and reuse of data, and guidance on training datasets.

The work is underway and will result in several publications starting in summer 2023. These documents will include “concrete recommendations”, relating for example to the design of AI systems such as ChatGPT.

The CNIL also wishes to play a role in federating and supporting the French and European AI ecosystem. For two years it has offered a regulatory ‘sandbox’, used in particular for health and education projects. A new call for projects in 2023 will focus on AI in the public sector. Other support schemes exist and will be mobilized this year.

Dialogue and compliance monitoring

“The CNIL wishes to engage in a lively dialogue with research teams, R&D centers and French companies developing, or wishing to develop, AI systems in a logic of compliance with personal data protection rules,” it writes in summary of this third strand.

Finally, the Commission will carry out audit and control actions on AI systems. Augmented cameras were already among its 2023 control priorities. It will also examine the uses of AI to fight fraud, in particular social insurance fraud.

The CNIL will also carry out audits and checks as part of the investigation of complaints filed with its services. The goal is to clarify the rules for training and using generative AI. A control procedure has been opened following complaints, aimed in particular at OpenAI.

For the French authority, these actions aim to verify that AI players comply with their obligations, including carrying out a data protection impact assessment (DPIA), informing individuals, and putting in place measures enabling them to exercise their rights.

“Thanks to this collective and essential work, the CNIL wishes to establish clear rules that protect the personal data of European citizens in order to contribute to the development of AI systems that respect privacy,” it concludes.


