State Secretary Brantner: “Made in Germany” can become a seal of quality for AI

With the new AI regulation, the EU has created a comprehensive set of rules for the use of artificial intelligence. Critics from the business community say it is too comprehensive and bureaucratic. Franziska Brantner, who helped negotiate the law for Germany, is certain that these rules will help Europe leverage its locational advantages for this new industry.

After years of negotiations, the EU’s AI regulation is about to be finally adopted, creating a comprehensive set of rules for the use of artificial intelligence. The new rules run to 900 pages. Is that enough to kill the emerging AI industry in Europe?

Franziska Brantner: The AI regulation creates legal certainty to enable the innovations inherent in this great technology while at the same time minimizing the risks that arise in its application. This risk-based approach is, of course, complex. If you wanted to make it simple and either ban AI outright or allow everything, half a page would suffice. I think it is right that we take a more differentiated approach here.

We now have this complex set of rules, which still entails bureaucracy – for example, the development of norms and standards. Other locations, above all the USA, have nothing comparable on this scale. It will be more attractive for companies to develop this future technology there.

That must not happen. That is why it is so important that we proceed pragmatically and without needless complication in implementation; we in the federal government are already in close contact with companies. In fact, the US national approach is more similar to ours than you might think. On top of that, there is a patchwork of different regional regulations in the United States, whereas we in Europe rely on a common, uniform framework. Our opportunity is to create a large market. Common norms and standards, which companies are now developing themselves on the basis of the new law, can be our competitive advantage. Standards have been Germany’s export hit for over 100 years because people know they can rely on the quality of “Made in Germany”. For AI, too, if we develop these norms and standards cleverly and quickly, “Made in Germany” can become a trusted seal of quality that is valued worldwide.

That’s a great vision. But is that realistic? Can Europe still catch up with the large American technology companies when it comes to artificial intelligence?

AI is much more than large language models like ChatGPT – an area where we do have strong European competitors, even though the market power lies elsewhere. With our leading machine manufacturers in Germany and Europe, we have great potential, especially in industrial AI applications. If we combine their know-how and production data intelligently with artificial intelligence, the most competitive, smart and efficient machines can emerge. That is an enormous locational advantage.

Artificial intelligence is not just a promising industry. It is also a matter of technological sovereignty in a technology that will influence more and more areas of life and carries risks with it. Much of the debate concerns generative AI, which can generate texts, images or videos – and thus also produce false information and hate speech. With social networks, we Europeans have already lost some of this technological sovereignty and have only limited means of influence. Can we still prevent that with AI?

First of all, I would like to make clear once again the enormous potential that lies in this technology. Just one example: I come from Heidelberg, home of the German Cancer Research Center. It is unbelievable what is being developed there – the clever use of data and AI can save lives. At the same time, of course, there are risks, and we in Europe must be able to manage or limit them. On the one hand, we can achieve this through more technology “Made in Europe”. On the other, it is a regulatory task – such as the ban on the “social scoring” that is widespread in China – and a question of consistent law enforcement.

Hasn’t that ship already sailed?

The monopoly-like market power of the large non-European platforms, which AI applications can further cement, is of course a challenge. At the same time, we as the EU have now created instruments to combat hate and disinformation. The Digital Services Act enables the EU Commission to take action against large platforms such as Telegram, TikTok or X, formerly Twitter, if they do not comply with the law. The AI regulation also imposes an obligation to be transparent about what was generated with AI. If AI-generated content is distributed, it must be recognizable as such, especially on social platforms. This is important for being able to distinguish truth from lies in the digital space, and it gives users more self-determination over their information behavior. Ultimately, the AI regulation strengthens our democracy.

Enshrining this in law is one thing, enforcing it is another. Even before AI, countering disinformation campaigns was already difficult.

Until now we had no European legal basis for addressing this problem, and I am very glad that we now have one. What matters now is consistent implementation; the ball lies primarily with the EU Commission. Proceedings have just begun, for example against TikTok and Meta, and the penalties can be quite significant. Incidentally, with the AI regulation we now also have an instrument to protect intellectual property, without which we in Europe would be considerably poorer. The copyright of artists, journalists and researchers, for example, must also apply in the age of AI. If their texts, works or data are used to train AI models, they must benefit from that as well. For me, that is part of what we mean when we talk about security and sovereignty.

This also raises the question: Will an artist, a writer or even a German medium-sized company be able to enforce corresponding claims against the technology giants that dominate the industry?

That is why the AI regulation not only defines the rights of intellectual property owners. It also includes transparency obligations and avenues for complaints and legal action. In addition, there will be an AI Office at the European level as a central actor that keeps a systematic eye on the providers and can take action against problems.

OpenAI and other large providers of text, image and video AI have so far not disclosed which data they used to train their models, making it very difficult even to identify copyright claims. Is that supposed to change with the AI regulation?

Yes, companies will be required to disclose these records to customers. We have explicitly regulated this in the law.

Would you accept it if companies chose not to offer certain, perhaps particularly powerful AI models in Europe because they do not want to disclose their data?

Let’s wait and see for now. The AI regulation creates appreciation for intellectual property. We should work to preserve that.

Max Borowski spoke to Franziska Brantner