Generative artificial intelligence (AI) systems will increasingly be at the heart of our economies and societies. They must be explicitly covered by the European AI regulation at a time when a race, in many ways reckless, has been unleashed. France and Europe can take their full place in the deployment and economy of AI, provided they capitalize on their strengths: the protection of fundamental rights, a cutting-edge industry and trusted AI.
Generative AI systems are developed in two main phases. First, a model, called a foundation model or generative model, is built by learning from very large quantities of data using significant computing capacity. Second, that model is implemented in a system, which can be general-purpose, like ChatGPT, which produces answers from queries, or adapted to specific industrial domains by incorporating data specific to them.
Foundation models are notoriously unreliable and lack robustness. Even their designers do not understand precisely how they work. Yet they are powerful, because they encode a phenomenal amount of information, and they have thus become the foundation on which a wealth of systems and applications are built.
Placing the regulatory burden almost exclusively on second-stage system vendors, who build and deploy solutions based on foundation models, is neither fair nor desirable. It would even harm French and European industrial innovation. The suppliers of these foundation models, whatever their nationality, must also take their share of responsibility, starting from the design phase.
Risk reduction
Yet this is the current challenge of the trilogues, the negotiations between the three bodies of the European Union (the Council, the Commission and the Parliament) to arrive at the final version of the European regulation on AI. Mainly at issue is the position of France, Germany and Italy on the articles voted in June 2023 by the European Parliament to lay the foundations for supervising the development and deployment of foundation models, whose effects could go so far as to threaten our democracies. Indeed, these models are liable to produce false information and carry out unwanted actions, which could lead to a tsunami of misinformation, fraud and cybersecurity incidents in the years to come.