Self-driving cars spot these two types of pedestrians much less well, study finds


A study of eight pedestrian detection systems used in self-driving cars highlights a significant bias against two categories of people in particular. The AI's training data is to blame.


Hands-free driving is already a reality; in France, it has been authorized for a year. That does not mean you can simply get a car equipped with such a system, start it, and take a nap for the journey. Autonomous vehicles are far from safe, as even some manufacturers admit: accidents they cause, of varying severity, are regularly reported.

And this recent study will not improve matters. Researchers from King’s College London (KCL) tested eight of the most commonly used pedestrian detection systems. After having them analyze more than 8,000 photographs of pedestrians, they drew rather alarming conclusions: not all people are recognized with the same accuracy, and two categories of pedestrians are particularly affected by detection errors.

Self-driving cars have a harder time spotting children and dark-skinned people

The study shows that self-driving cars recognize children far less reliably than adults: detection is about 20% more accurate for adults than for children. The other group prone to misdetection is pedestrians with dark skin; the systems are 7.5% more accurate at spotting a light-skinned person. For Dr. Jie Zhang, Professor of Computer Science at KCL, the reason is simple: the training of the artificial intelligence behind the detection is biased.
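Disparity figures like these come from comparing per-group detection rates. A minimal sketch of that comparison, using made-up counts (the study's actual data are not reproduced here):

```python
# Hypothetical detection counts per pedestrian group (illustrative only;
# not the study's real numbers).
detections = {
    "adult": {"detected": 90, "total": 100},
    "child": {"detected": 70, "total": 100},
}

def detection_rate(group: str) -> float:
    """Fraction of pedestrians in a group the system detected."""
    g = detections[group]
    return g["detected"] / g["total"]

# The gap between groups is the kind of disparity the study reports.
gap = detection_rate("adult") - detection_rate("child")
print(f"adult-child detection gap: {gap:.1%}")
```

With these invented counts, the gap would be 20 percentage points, mirroring the order of magnitude the researchers describe for adults versus children.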


According to Zhang, “the open source image galleries used to train these systems […] are not representative of all pedestrians and are geared towards lighter-skinned adults. With less data to train on, the AI becomes less accurate when detecting underrepresented groups.” The same applies to children. The study also shows that the lower the light conditions, the greater the detection errors for both categories.

On paper, the solution is therefore simple: “manufacturers must strive to ensure that their AI systems are fair and representative”. For Zhang, this requires political impetus and stricter regulation of fairness in artificial intelligence.

Source: The Next Web


