AI in France: the CNIL draws red lines


The explosion of generative AI has reignited the debate around artificial intelligence and the risks it could pose for the protection of personal data. In this context, the CNIL has organized itself for action.

In January, the independent administrative authority set up a department dedicated to AI. In May, it presented its action plan, built around three main objectives: unite, supervise and audit. Several measures have been taken since then, including the creation of a regulatory sandbox and a support scheme.

The GDPR reconciles innovation and responsibility

The CNIL continues to implement its strategy with the publication of a first series of guidelines. Their goal: to set a framework for uses of AI that respect personal data. The regulator has already indicated that two further series will follow.

With these guidelines, the CNIL provides initial responses to “concerns” raised in recent months. These concerns relate in particular to certain aspects of the GDPR, the European data protection regulation applicable since 2018.

“According to some, the GDPR principles of purpose limitation, data minimization, limited retention and restricted reuse would slow down, or even prevent, certain research into or applications of artificial intelligence,” reports the CNIL.

It responds to these objections, maintaining on the contrary that regulation and the development of AI can be reconciled. The designers of these systems are, however, presented with conditions and “red lines” that must not be crossed.

Big Data, but without abuse

The GDPR principle of purpose limitation, for example, does apply to AI. In the training phase, the goals can be difficult to list exhaustively, and the CNIL admits that an operator cannot define all future applications of an algorithm in advance.

However, the operator must specify the “type of system and the main functionalities possible.” Regarding the principle of data minimization, the CNIL considers that it does not prevent training on very large data sets.

The data used must, however, “have been selected to optimize the training of the algorithm while avoiding the use of unnecessary personal data.” Precautions to ensure data security are also “essential.”

What about the retention period of training data? It “can be long if it is justified,” judges the CNIL, with the trade-off assessed on a case-by-case basis. Training datasets that “require significant scientific and financial investment” may benefit from greater leeway, as may those serving as “standards widely used by the community.”

A regime designed for innovative players

Finally, regarding the reuse of databases: it “is possible in many cases,” notably for data publicly accessible on the internet. Caution is still required, however. The reuser must “verify that the data has not been collected in a manifestly illicit manner and that the purpose of reuse is compatible with the initial collection.”

In addition, the CNIL underlines, the GDPR includes provisions relating to research and innovation. These allow for “a regime designed for innovative AI players who use third-party data.”

Clearly, the authority maintains, the development of AI systems is compatible with protecting privacy. “Moreover, taking this imperative into account will enable the emergence of ethical systems, tools and applications faithful to European values.”
