A battle over artificial intelligence is being played out in the AI Act


The European Union is expected to be more lenient on open source AI models than on their proprietary counterparts. At least, that is what emerges from a leaked working document on the future AI Act regulation.

Open source artificial intelligence should benefit from a more favorable legislative framework than proprietary AI, at least at the European level. That, at any rate, is what emerges from the ongoing discussions about the future regulation on AI (the AI Act). A version of the text dated January 21, 2024, has leaked, giving an indication of where the negotiations stand.

This 892-page document states that “the obligations provided for by this regulation do not apply to AI systems distributed under a free and open license.” This exemption, however, comes with exceptions. The AI Act will still apply to open source models in cases where:

  • they are placed on the market or put into service as a high-risk AI system;
  • they fall under Titles II (prohibited AI practices) or IV (transparency obligations for certain AI systems).
The European Parliament. // Source: Frederic Koberl

These provisions were spotted in the leaked document, which was shared publicly. “A justified action given the massive public attention,” the person who published the text wrote on X on January 21.

The proposed AI regulation, unveiled by the European Commission on April 21, 2021, did not initially contain an explicit provision on open source. These latest developments remain subject to change, however, as negotiations are still ongoing. The text is expected to come into force in spring 2024.

Regulation by degree of risk

The entire architecture of the AI Act rests on a risk-based approach: the more “dangerous” the AI system, the stronger the obligations that apply to it. Four risk levels are defined: minimal or zero, limited, high, and unacceptable. A system rated at the unacceptable level will be banned in the EU.

For open source, then, AI models classified in the first two levels will benefit from a more flexible framework than closed programs. From the third level onward, however, these exceptions cease, and the standard rules apply again, including certain transparency obligations.

The AI Act provides for four levels of risk: minimal, limited, high and unacceptable

Open source is by nature a form of transparency, since it exposes the contents of an artificial intelligence model for everyone to see. However, an AI model being open source does not necessarily mean that its training data was handled with the same openness. Obligations remain in this area.

On paper, open and freely reusable projects such as Llama 2 from Meta (the parent company of Facebook) or the tools of the French startup Mistral AI (which released an 87 GB language model in December via BitTorrent) appear favored compared to closed solutions, such as those of OpenAI (ChatGPT, DALL-E 3).

Regulatory debates around open and closed source models reflect the ideological tensions of a booming sector, with two opposing visions of how to build AI and manage models. According to studies on the subject, proprietary systems are still ahead of their rivals, but the gap keeps narrowing.







