Widespread hypocrisy on AI: ChatGPT versus facial recognition


Since ChatGPT hit the market, not a week goes by without some good soul telling us how dangerous the tool is. Some don’t even bother with false modesty and call for an outright ban. Recently, Italy’s equivalent of the CNIL blocked access to the tool in Italy. In France, we are apparently not there yet; only a few complaints have been filed. Still, this generalized hypocrisy is getting frankly tiresome.


In Decrypted Zapping, we love ChatGPT. AI is nothing new: other content-generation tools existed before it arrived; they were just less capable and far more expensive. The great thing about ChatGPT, apart from its price, is that it behaves much like a human being. The learning curve is gentle: a few hours with the tool are enough to get it performing some complex tasks. There is real satisfaction in taking the time to build fairly elaborate queries that account for every parameter; you can easily spend several evenings in a row at it.

Of course, it is wrong, often and badly. You have to use it daily to understand what it can and cannot do. With scripts, it sometimes goes around in circles; with text, it struggles to “read”. Some words are “forbidden” and trigger canned responses. But for $20 a month, you get what you pay for: a very good virtual assistant that happens to beat Google on certain queries. As for SEO, there is little to discuss: it is extremely effective, to the point of being almost annoying.

Will ChatGPT, or rather artificial intelligence in general, profoundly transform society, as some pundits trumpet on television? We are still a very long way from Skynet, and no, today’s professions are not going to disappear; some will simply evolve into something else. Do not imagine it is a magic tool. It is just a tool, a very sophisticated one, but one that still has shortcomings its designers and regular users know well.

State surveillance: the CNIL returns a 404

Where questions are really warranted is over the use of personal data. If you have followed parliamentary news, you will have noticed that, in the run-up to the Olympic Games, an experiment in algorithmic video surveillance is being put in place. The subject has drawn far less ink than ChatGPT, and yet, from a privacy standpoint, it is far more serious.

Every day, our personal data and our privacy are treated as an open bar by state services. Almost anyone can access police files. We are forced to submit identity documents online to access our rights, with the nagging fear that they could end up in the public square. Not to mention banks and real estate agencies. A bank can calmly ask for your latest tax notice and a pile of personal documents on the basis of murky justifications; at no point do you know what is done with them, who will consult them, or how they will be stored. The same goes for real estate agencies, which demand a long list of personal documents.

The difference with ChatGPT? No one forces you to use ChatGPT, whereas you are required to have a bank account and housing, and the State stores an enormous amount of information on its citizens. Meanwhile, the CNIL is nowhere to be found. It is an understatement to say that quite a show was made of extolling the benefits of the GDPR. We would like the text to also serve to rein in the State, not just a handful of marketers. In France, the whole of our lives is freely accessible to state agents and the like (and, above all, to more and more contract workers).

A Pôle Emploi agent can ask for your bank statements to make sure your aunt has not left you some money. A civil registrar can decide that, no, your name is not really your name. Local authorities can blanket the streets with surveillance cameras on every corner and in between. And if your name appears in the media, your arrests, your fines, and your hospitalizations will be recounted in learned detail; if you doubt it, look at the Pierre Palmade case.

Misleading images

It cannot be said that there are no serious breaches of privacy, quite the contrary. But ChatGPT is being attacked with a bazooka when it is a mosquito compared with everything citizens endure from the State. The only real short-term “danger” with ChatGPT and its kind is image generation.

Clever users have employed artificial intelligence to generate news images that circulated widely on social networks. The problem: they were not labeled as AI-generated. On that point, there is a genuine risk to information. It could nevertheless be an opportunity for the media: it should push them to hire real photojournalists and to vouch for the veracity of their images. They should also get into the habit of systematically crediting the photos they use, directly on the photos themselves, as AFP or Alamy do.

From a technical standpoint, progress remains to be made before an AI-generated image can be detected with certainty. Some tools accessible to the general public already exist, in the form of browser extensions, but you have to learn to “read” a photo, and even then you can never be absolutely sure that an image has not been faked, modified, or generated by artificial intelligence.
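As a rough illustration of how fragile such detection is, here is a minimal sketch (pure Python, no external libraries) of one weak heuristic: checking a PNG’s text metadata for the “parameters” keyword that some generators (certain Stable Diffusion front-ends, for example) embed when saving an image. The function names are our own; the point is precisely that re-saving or stripping metadata defeats this check entirely, which is why no tool can offer certainty.

```python
import struct
import zlib  # crc32, if you also want to verify chunk checksums

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return the tEXt chunks of a PNG as a {keyword: value} dict."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    chunks, pos = {}, len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        # Each chunk: 4-byte length, 4-byte type, body, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt body is "keyword\x00value", both latin-1 encoded.
            keyword, _, value = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length
    return chunks

def looks_ai_generated(data: bytes) -> bool:
    """Weak heuristic: flag images whose metadata contains the
    'parameters' keyword some generators write. Stripping or
    re-encoding the file removes this trace entirely."""
    return "parameters" in png_text_chunks(data)
```

A check like this catches only naive cases: the moment an image passes through a screenshot, a social network re-compression, or any metadata stripper, the trace disappears, which is exactly why newsroom-side provenance practices matter more than detection tools.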

In reality, apart from the question of photos, which is quite specific and can be resolved by good practices within newsrooms, there is no ChatGPT or artificial intelligence “subject” at this stage. Perhaps we are simply witnessing a rejection reflex from well-established figures who, under the guise of worrying about everyone’s well-being, fear above all for their own social and professional positions.




