Algorithmic discrimination: how to build trustworthy AI?


According to a KPMG study released last month, three in four people are more willing to trust AI systems when assurance mechanisms are in place.

That work remains to be done: only two in five respondents believe that current regulations and protections facilitate the adoption of AI, a proportion that, according to the same study, reflects public dissatisfaction with AI regulation.

In this context, 71% of respondents expect AI to be regulated and 61% say they are wary of AI systems. It remains to be seen what these assurance mechanisms are.

“We must first accept that machines make mistakes”

For study respondents, these are mechanisms intended to ensure ethical and responsible use, such as systems for monitoring accuracy and reliability, independent audits of AI ethics, and codes of conduct. One of the areas of concern is bias, that is, a form of partiality, in AI systems.

“We must first accept that machines make mistakes,” explains Ivana Bartoletti, founder of the Women Leading in AI Network and an employee of Wipro. She has just co-written a report for the Council of Europe entitled “Study on the impact of artificial intelligence systems, their potential for promoting equality, including gender equality, and the risks they may cause in relation to non-discrimination”.

Biases can creep into AI systems in many ways. For example, through data used for training, which may reflect prejudices present in society. And as AI continues to permeate various aspects of human society, the risk of harm and error due to biased decisions increases significantly.
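
To make this concrete, here is a minimal, hypothetical sketch in plain Python (the data and group names are invented for illustration) of how a historical dataset can quietly encode a prejudice: if past decisions favoured one group, a model trained on those labels starts from a skewed ground truth.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# The labels reflect past human decisions, prejudices included.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

# Count positive outcomes per group to expose skew in the training labels.
totals, positives = defaultdict(int), defaultdict(int)
for group, hired in records:
    totals[group] += 1
    positives[group] += hired

for group in totals:
    rate = positives[group] / totals[group]
    print(f"{group}: positive-label rate = {rate:.2f}")

# A large gap between groups means any model fitted to these labels
# inherits a biased ground truth, whatever the algorithm used.
```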

“AI bias is not only linked to issues of discrimination, but also to performance”

“Companies must be aware of this, because AI bias is not only linked to issues of discrimination, but also to issues of performance. A biased AI, for example, will lead you to make bad financial decisions.” She offers food for thought on this point: “Distrust of these machines, given their very design, would actually be a good approach,” she says.

To correct these biases, it is then essential to understand the development process and the functionalities of AI systems. Lama Nachman, director of the Intelligent Systems Research Lab at Intel Labs, explains that feedback from a wide range of domain experts must be included in the AI training and learning process.

“We assume that the AI system learns from the domain expert, not the AI developer. And the person who teaches the AI system does not know how to program an AI system. Yet the system can automatically build action and dialogue recognition models,” she explains.
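
Nachman’s description suggests an interactive, human-in-the-loop setup: the expert only confirms or corrects what the system guesses, and the system refines its model from that feedback. A highly simplified, hypothetical sketch of such a loop (no real Intel Labs API is implied):

```python
from collections import Counter

# Frequency-based "model" built purely from expert feedback.
action_model = Counter()   # counts of (observation, action) confirmations

def predict(observation):
    # Guess the most frequently confirmed action for this observation.
    candidates = {a: c for (o, a), c in action_model.items() if o == observation}
    return max(candidates, key=candidates.get) if candidates else None

def expert_feedback(observation, correct_action):
    # The expert never writes code: they just state what the right action was.
    action_model[(observation, correct_action)] += 1

# The expert teaches by example; the system builds its recognition model.
expert_feedback("hands_on_valve", "close_valve")
expert_feedback("hands_on_valve", "close_valve")
expert_feedback("alarm_sounding", "evacuate")

print(predict("hands_on_valve"))   # -> "close_valve"
```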

“The problem when bias is coded into software”

As a result, biases in AI systems can produce unfair outcomes, leading not only to discrimination but also to poor decision-making.

“[In this report], we set out to find out what the problem is when biases are coded into software,” explains Ivana Bartoletti. “A lot of people think that people are biased anyway, so who cares? But the problem is that when biases are coded and automated, they are much harder to identify and challenge.”

This is what the authors of the report call algorithmic discrimination. “And algorithmic discrimination is much more difficult to combat than other forms of discrimination,” says Ivana Bartoletti. This reinforces the need for effective methodologies to detect and mitigate these biases.
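
The report does not prescribe a single technique, but one common starting point for such methodologies is to compare a model’s outcomes across groups. Below is a minimal sketch of a demographic parity check; the data, group labels and the 0.1 tolerance are illustrative assumptions, not values from the report.

```python
# Minimal bias-detection sketch (demographic parity): compare the rate of
# favourable model decisions across groups. All values are illustrative.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]           # model decisions
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def selection_rate(preds, grps, target):
    selected = [p for p, g in zip(preds, grps) if g == target]
    return sum(selected) / len(selected)

rate_a = selection_rate(predictions, groups, "a")
rate_b = selection_rate(predictions, groups, "b")
gap = abs(rate_a - rate_b)

print(f"selection rate a={rate_a:.2f}, b={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.1:   # tolerance chosen for illustration only
    print("Potential algorithmic discrimination: investigate and mitigate.")
```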

“Data scientists are the first line of defense for implementing responsible AI”

“First of all, data scientists must realize that they are, in most companies, the first line of defense for implementing responsible AI,” says the researcher. “The second line of defense must be privacy experts, data security experts, lawyers and risk management teams.”

“The main problem I see right now is that the second-line roles and the data scientists, i.e. the code architects, are speaking different languages. It is only when they speak the same language that everyone will be able to understand what fairness, impartiality, transparency and credibility are.”

Perhaps these two groups could begin a dialogue through organizations working to combat the dangers of AI in business.

Create verifiable rules for AI auditors

ForHumanity is one such non-profit organization. It examines and analyzes the risks associated with AI and autonomous systems. Ryan Carrier, executive director and founder of ForHumanity, tells ZDNET that the organization is made up of volunteers. “ForHumanity has more than 1,600 people from 91 countries around the world, and our headcount is growing by 40 to 60 people per month,” he says.

The ForHumanity community is 100% open and there are no restrictions on who can join. Simply register on the site and agree to a code of conduct. Anyone who volunteers for the nonprofit can participate in the process as much as they want.

One of ForHumanity’s main goals is to create verifiable rules for AI auditors based on recognized laws, standards and best practices. The organization then submits these verifiable rules to governments and regulators.

ForHumanity has applied to the European Union and is close to obtaining an EU-approved certification scheme for AI and algorithmic systems, which Carrier said would be a first. That would give companies investing in AI systems greater legal certainty.

Implement AI education that promotes fundamental rights, democratic values and the rule of law

The Center for AI and Digital Policy (CAIDP) is another organization that works on AI research and policy. It focuses on delivering AI education that promotes fundamental rights, democratic values and the rule of law.

The CAIDP organizes free seminars. So far, 207 students have graduated from these learning sessions.

Participants are “lawyers, practitioners, researchers, who learn how AI impacts rights and leave with skills and advocacy know-how to hold governments accountable and influence change in the AI sector,” CAIDP President Merve Hickok told ZDNET. The sessions last one semester and require a commitment of approximately six hours per week.

Globally, 82% of respondents are aware of AI, but half of them admit to not understanding the technology and how it is used, according to the KPMG study. So there is still a long way to go.


