“In the digital world, innovation must also be ethical, and sometimes that means prohibiting before authorizing”

A philosopher and psychoanalyst, Elsa Godart created the university degree “Ethics and Digital Health” at the University of Paris-Est-Créteil-Val-de-Marne (UPEC). She is a researcher at the Hannah Arendt Interdisciplinary Laboratory for Political Studies (Gustave Eiffel University) and an associate researcher at the Political Anthropology Laboratory (EHESS-CNRS). She is the author of Ethics of Sincerity: Surviving the Age of Lies (Armand Colin, 2020).


You would like to create an ethics and digital charter. For what purpose?

It would respond to a major problem with techno-scientific developments: at some point, they escape us, and we are unable to grasp them in their entirety. Take a small but emblematic example, the problem of deepfakes. Some filters available on the TikTok app, which change a person’s appearance, are completely undetectable. This raises the question of self-construction, especially for young people in the midst of the identity crisis of adolescence. Such applications are put on the market without being tested upstream, without any approval and without any ethical framework, unlike scientific discoveries, which must be validated by a committee before being released.

What do you suggest?

A standard should be imposed. Regulation in these areas is absolutely necessary. As soon as a technological innovation touches the social and human sphere, it should go before an ethics committee; human beings and everything that is “human” should not be treated as objects of technology. We are in a kind of “ethical vacuum,” as the German philosopher Hans Jonas put it in his book The Imperative of Responsibility [1979]. He denounced “the apocalyptic possibilities contained in modern technology,” such as the H-bomb, which set a precedent: henceforth, humanity could destroy itself. When we live in a techno-scientific environment that is beyond us, we no longer have the ethical means to think about things and their future; we have to establish new principles, new rules. Innovation must also be ethical. And sometimes that means forbidding first before allowing!

Does this also raise the issue of trust?

Yes, and this trust is impossible until we master the object. Innovations like artificial intelligence [AI] are tested, released to the general public, and only then analyzed. It is extremely anxiety-inducing. Take the example of ChatGPT, which has crowds worried. There was no reflection upstream on the impact such a tool could have. As for mastering it, it was available and used by everyone before we had even tamed it. Not to mention the higher stage toward which we are heading, the empowerment of AI, which will become self-generating; at that point, we will have lost all possibility of control. Once ChatGPT has progressed, once it is more precise, the question will arise: why think for ourselves, why try to write, when the machine can do it as well as I can, and even better?
