Microsoft will stop guessing your emotions with its controversial AI


Microsoft is changing its mind about facial recognition. After spending years training an AI to recognize human emotions, the company is ending the project, deemed too unreliable and too prone to abuse.

Have you ever heard of Project Oxford? No? It hardly matters now, as Microsoft is putting an end to this experiment, which was intended to let artificial intelligence detect and analyze human emotions. Unveiled in 2015, the tool was meant to give developers facial recognition software capable of attaching labels to our facial expressions.

The AI, until now accessible via Microsoft’s Azure service, will be shut down in accordance with the company’s newly adopted ethical principles. The Redmond firm is far from the first to question the use of facial recognition: Amazon has restricted the use of its Rekognition algorithm for several years, as has Facebook, which abandoned its efforts to identify people in photos and videos.

Emotion recognition is unreliable

After extolling this “technological advance”, more and more firms are backtracking; the controversies surrounding the much-criticized company Clearview AI are proof of that. In Microsoft’s case, the emotion-identification algorithm had plenty of blind spots.

First, as Microsoft executive Natasha Crampton explains, because “there are huge cultural, geographic and individual variations in how we express ourselves”. A smile in one corner of the world does not necessarily mean the same thing 20,000 km away. An ironic or forced smile can even express the opposite of what the machine thinks it has detected.


Biases in the analysis

In an article published in 2019, the prestigious MIT explained that “emotion recognition is expensive and requires the collection of a lot of extremely specific data — more than anyone has today”. Given the already unfavorable climate around facial recognition, collecting even more data would hardly go over well. AIs, and the algorithms behind them, are also subject to racist and sexist biases that can be problematic when deployed at scale; in 2020, the CNIL, France’s data protection authority, raised concerns about these excesses. Microsoft’s AI also classified subjects into two binary genders, male and female, “which is not in accordance with our values”, added Natasha Crampton.

For all these reasons (and a few more), Microsoft has terminated the program, and existing customers will have until June 2023 to find an alternative. The tool will live on only within the Seeing AI application, which helps visually impaired people understand their surroundings through the images captured by a phone’s camera. While we can welcome the fact that such tools are no longer so freely available to the general public, Microsoft, like Amazon and Facebook, nevertheless used its AI extensively for many years and must by now have amassed a sizable database. We will have to trust the multinational not to misuse it.

