Why our AI models could make robots sexist and racist


Louise Jean

July 15, 2022 at 1:20 p.m.


© cottonbro / Pexels

The racist and misogynistic biases of artificial intelligence are passed on to the robots that use it, a serious problem that could lead to dangerous behavior towards women and non-white people.

A new study from Johns Hopkins University tested a robot whose AI model appears to perpetuate racist and sexist behavior.

The biases of our artificial intelligence

It is nothing new that artificial intelligence systems contain biases introduced by the humans who build them. Facial recognition, for example, has notoriously struggled to recognize non-white faces when unlocking smartphones. But the problem does not stop there.

Predictive policing, for example, is a crime-prevention method that uses technology to flag suspicious activity. Among other tools, it relies on AI-equipped drones that appear to focus on marginalized Black communities already exposed to police violence and miscarriages of justice. The AI systems that decide who to watch carry racist biases that treat Black men as more likely to commit a crime.

A systemic problem with deep roots

Now robots themselves are displaying discriminatory behavior. The Johns Hopkins study tested a robot equipped with CLIP, an AI model pre-trained by OpenAI to recognize objects in a collection of images scraped from the internet. Such models are prebuilt by large companies like Google or Microsoft, and they carry the biases handed down by their developers and by the content they are trained on.

In the study, the robot was asked to pick the criminal between a white face and a Black face, and it chose the latter. Ideally, the robot should not have chosen at all, since criminality cannot be read from a face. In about a third of these scenarios the robot did refuse to act on the discriminatory prompt, but the rest of the time it complied.

The study concludes that robots act out toxic stereotypes at scale, and that simply correcting these disparities will not be enough. The problem is rooted in a critical lack of representation of minorities in the tech world, but also in the spread of stereotypes across the web, which is where these systems learn. The researchers argue that AI development should be put on hold while structural policies are rethought, so that women and people of color can access positions in AI development, robotics and the tech industry more broadly.

Source: Vice
