Artificial intelligence reflects the world around us, and that's a challenge


Developers and data scientists are human, of course, but the systems they create aren't – they're just coded reflections of the human reasoning that goes into them. Ensuring that artificial intelligence (AI) systems deliver fair and unbiased results, while still supporting sound business decisions, requires a holistic approach involving most of the business. We cannot – and should not – expect IT staff and data scientists to act alone when it comes to artificial intelligence.

There is a growing desire to extend artificial intelligence beyond the test beds and confines of systems development and integrate it into the wider business. For example, at a recent AI Summit panel, held in New York City in December 2021, panelists agreed that business leaders and managers should not only question the quality of decisions made by AI, but also get more actively involved in formulating them.

So, how do you remedy biases and inaccuracies? Clearly, this is a challenge that must be taken up by leaders across the company. IT, which so far has carried most of the weight of AI, cannot do it alone. Industry experts advocate opening up AI development to more human engagement. "Putting the burden on IT executives and staff is to mistake a set of important company-wide ethical, legal and reputational issues for a technical issue," says Reid Blackman, CEO of Virtue and advisor to Bizconnect. "Bias in AI is not just a technical problem; it is nested in every department."

Fighting bias

To date, not enough has been done to combat AI bias, says Reid Blackman: "Despite the attention paid to biased algorithms, efforts to address this problem have been fairly minimal." And removing bias and inaccuracies in AI takes time. "Most organizations understand that the success of AI depends on establishing a relationship of trust with the end users of these systems, which ultimately requires fair and unbiased AI algorithms," says Peter Oggel, CTO and senior vice president of technology operations at Irdeto.

More needs to be done beyond the boundaries of data centers and analysts' desks. "Data scientists don't have the training, the experience or the business knowledge needed to determine which of several incompatible fairness metrics is appropriate," says Reid Blackman. "In addition, they often lack the clout to raise their concerns with relevant senior executives or subject-matter experts."

It's time to do more "to look at those results not only when a product is live, but also during testing and after any major project," says Patrick Finn, president and CEO of the Americas at Blue Prism. "Organizations also need to train technical and sales staff on how to reduce bias within AI and within their human teams, so that they feel empowered to participate in improving the use of AI in their organization. It is both a top-down and a bottom-up effort, fueled by human ingenuity: remove obvious biases so that the AI does not absorb them and, as a result, slow down work or worsen someone's results. Those who don't think about AI fairness aren't using AI in the right way."

Defining fairness

To solve this challenge, "you have to go beyond validating AI systems against a few parameters," explains Peter Oggel. "If you think about it, how do you define the concept of fairness? A given problem can have multiple points of view, each with a different definition of what counts as fair. Technically, it's possible to calculate metrics for datasets and algorithms that say something about fairness, but what should they be measured against?"
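
To make Oggel's point concrete, here is a minimal Python sketch, using invented toy data, of two common but competing fairness metrics: demographic parity (do both groups receive positive predictions at the same rate?) and equal opportunity (do both groups get correct positive predictions at the same rate?). Deciding which metric is the right yardstick is exactly the judgment call he describes.

    def demographic_parity_gap(y_pred, group):
        # Difference in positive-prediction rates between groups A and B.
        def rate(g):
            preds = [p for p, grp in zip(y_pred, group) if grp == g]
            return sum(preds) / len(preds)
        return abs(rate("A") - rate("B"))

    def equal_opportunity_gap(y_true, y_pred, group):
        # Difference in true-positive rates between groups A and B.
        def tpr(g):
            preds = [p for t, p, grp in zip(y_true, y_pred, group)
                     if grp == g and t == 1]
            return sum(preds) / len(preds)
        return abs(tpr("A") - tpr("B"))

    # Invented toy data: true labels, model predictions, group membership.
    y_true = [1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

    print(demographic_parity_gap(y_pred, group))         # 0.5: outcome rates differ
    print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5: error rates differ too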

There needs to be more investment "in researching bias and understanding how to remove it from AI systems. The results of this research should feed into a framework of standards, policies, guidelines and best practices that organizations can follow. Without clear answers to these questions and more, business efforts to eliminate bias will be in vain," concludes Peter Oggel.

AI-related biases are often "unintentional and subconscious," he adds. "Raising awareness of the issue will go some way to addressing bias, but it's equally important to ensure the diversity of your data science and engineering teams, to provide clear policies and to ensure adequate oversight."

Shorter-term measures

While opening up projects and priorities to the business takes time, there are shorter-term actions that can be taken at the development and implementation level. Harish Doddi, CEO of Datatron, advises asking the following questions when developing AI models (a sketch of how the answers could be recorded follows the list):

  • How did previous versions of the model perform?
  • What are the input variables in the model?
  • What are the output variables?
  • Who has access to the model?
  • Has there been unauthorized access?
  • How does the model behave with respect to certain parameters?
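
As a sketch of how answers to that checklist might be recorded rather than left in people's heads, the following Python snippet attaches them to each model version as structured metadata. The schema, field names and values are invented for illustration, not a standard.

    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        name: str
        version: str
        previous_versions: list  # how earlier versions performed
        input_variables: list    # features the model consumes
        output_variables: list   # what the model predicts
        authorized_users: list   # who may access the model
        access_log: list = field(default_factory=list)  # audit trail for access checks
        metrics: dict = field(default_factory=dict)     # behavior against key parameters

    # Hypothetical example of a filled-in record.
    record = ModelRecord(
        name="credit_risk",
        version="2.1.0",
        previous_versions=["2.0.0: AUC 0.81", "1.3.0: AUC 0.78"],
        input_variables=["income", "tenure", "region"],
        output_variables=["default_probability"],
        authorized_users=["risk-team"],
        metrics={"auc": 0.82, "parity_gap": 0.04},
    )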

During development, "machine learning models are tied to certain assumptions, rules and expectations" that can produce different results once in production, says Harish Doddi. "This is where governance is essential. Part of this governance is a catalog that keeps track of all versions of the models. The catalog must be able to track and document the framework in which the models are developed, as well as their lineage."
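
A production team would likely use an off-the-shelf model registry for this, but a minimal, hypothetical in-memory catalog shows the idea Doddi describes: every version is recorded with its framework and training data, and a model's lineage can be walked back through its parents.

    class ModelCatalog:
        def __init__(self):
            self._entries = {}  # (name, version) -> lineage metadata

        def register(self, name, version, framework, training_data, parent=None):
            key = (name, version)
            if key in self._entries:
                raise ValueError(f"{name} {version} already registered")
            self._entries[key] = {
                "framework": framework,          # e.g. "scikit-learn 1.4"
                "training_data": training_data,  # dataset snapshot or hash
                "parent": parent,                # version this one was derived from
            }

        def lineage(self, name, version):
            # Walk parent links back to the first registered version.
            chain = []
            while version is not None:
                entry = self._entries[(name, version)]
                chain.append((version, entry["training_data"]))
                version = entry["parent"]
            return chain

    # Usage with invented model names and datasets:
    catalog = ModelCatalog()
    catalog.register("credit_risk", "1.0", "scikit-learn 1.4", "loans_2023q4.parquet")
    catalog.register("credit_risk", "1.1", "scikit-learn 1.4", "loans_2024q1.parquet",
                     parent="1.0")
    print(catalog.lineage("credit_risk", "1.1"))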

Businesses "need to do better at ensuring that business considerations do not trump ethical considerations. It's not an easy balancing act," says Peter Oggel. "Some approaches consist of automatically monitoring how model behavior evolves over time on a fixed set of prototypical data points. This makes it possible to verify that models behave as expected and respect certain constraints tied to common sense and known risks of bias. Additionally, performing regular manual checks of sample data to see how a model's predictions align with what we expect or hope to achieve can help spot emerging and unexpected issues."
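
As an illustration of the automated monitoring Oggel describes, the sketch below re-scores a fixed set of prototypical data points after each retraining and flags any prediction that drifts past a tolerance. The prototypes, tolerance and predict function are all assumptions for the example.

    # Fixed, hypothetical prototype inputs recorded when the model was approved.
    PROTOTYPES = [
        {"income": 30_000, "tenure": 1, "region": "north"},
        {"income": 90_000, "tenure": 10, "region": "south"},
    ]
    TOLERANCE = 0.05  # maximum allowed shift in predicted probability

    def check_drift(predict, baseline_scores):
        # Compare current predictions on the prototypes to the stored baseline.
        alerts = []
        for point, expected in zip(PROTOTYPES, baseline_scores):
            actual = predict(point)
            if abs(actual - expected) > TOLERANCE:
                alerts.append((point, expected, actual))
        return alerts  # empty means the model still behaves as expected

    # Usage: baseline_scores were recorded at approval time, e.g.
    # alerts = check_drift(new_model.predict, baseline_scores=[0.42, 0.07])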

Source: ZDNet.com




