“The question is not that of an artificial intelligence that replaces the expert, but of one that assists him”

Op-ed. The public fears artificial intelligence (AI) because it could reproduce real-world discrimination all the more effectively: Amazon is remembered for trying to filter job applications with AI, only to end up with white men at the interviews. If an AI is trained on data that carries such biases, it is no wonder that it reproduces them even better!

Yet even if we “clean up” the data used for AI, there will still be some way to go. This is the thesis of the National Institute of Standards and Technology (NIST), the American standards agency, in its report “A Proposal for Identifying and Managing Bias in Artificial Intelligence”, open for public comment as part of Joe Biden's plan to encourage responsible and innovative use of AI.

The bias of “solutionism”

The first risk of bias is to design an AI application around the data that already exists. Even if that data is free of bias, it will shape the question the AI is asked to solve. It is better to think through the problem to be solved independently; and if no training data exists yet, too bad, it will have to be created. Science proceeds no differently: we generate in the laboratory the measurements required by the experiment we designed beforehand, not the reverse (and when that is impossible, as in climate studies, we resort to numerical simulation).


Between the data we are able to collect and the data of the real world, there may be a gap: online questionnaires, for example, are too quickly judged representative when they obviously ignore everyone who does not respond.
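As a minimal sketch (not from the article), assuming a hypothetical population where the propensity to answer an online questionnaire correlates with the very opinion being measured, a few lines of Python illustrate how the survey estimate drifts away from the true value:

```python
import random

random.seed(0)

# Hypothetical population: 30% hold opinion A (the "true" rate we want to measure).
population = [1 if random.random() < 0.30 else 0 for _ in range(100_000)]

# Assumed non-response mechanism (illustrative rates): people holding opinion A
# answer the online questionnaire three times more often than the others.
def responds(opinion: int) -> bool:
    return random.random() < (0.60 if opinion == 1 else 0.20)

respondents = [op for op in population if responds(op)]

true_rate = sum(population) / len(population)
survey_rate = sum(respondents) / len(respondents)

print(f"true rate of opinion A: {true_rate:.1%}")    # ~30%
print(f"rate among respondents: {survey_rate:.1%}")  # ~56%: non-response bias
```

The survey overestimates the opinion by nearly a factor of two, even though every individual answer is perfectly honest; the bias comes entirely from who chooses to respond.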

We assume that an AI application, once deployed, has gone through a testing phase that validates it. Yet during Covid-19, according to NIST, AI applications were rushed into release on the assumption that their use in production would double as a test. These examples show biases introduced not by the data but by a sloppy AI development cycle.

Second, AI suffers from the bias of “solutionism”: it is expected to provide solutions to everything. Here we sin by optimism, imagining the potential of AI without limit. It then becomes impossible to bring in risk management, which would set reasonable limits on its use: risk managers are seen as killjoys.
