8 ways to reduce AI burnout


Responsible and ethical artificial intelligence has become the hot topic of our time, especially as AI seeps into every aspect of decision-making and automation. According to an IBM survey, 35% of companies today say they use AI in their business, and 42% are exploring the possibility.

The same IBM survey finds that trust is hugely important – four in five respondents say being able to explain how AI arrived at a decision is important to their business.

However, AI is still code – a series of 1s and 0s. It conveys no empathy and often ignores context.

It has the potential to produce biased and harmful results. As AI moves up the chain of command – from simple chatbots and predictive maintenance to executive or medical decision support – organizations need to know how to weigh its outputs against human judgment.

In other words, AI developers, implementers, users, and supporters need to be able to explain their reasoning, explain how decisions are made, and adapt constantly to new scenarios.

The difficulty of “responsible” AI

However, implementing responsible AI is not easy, and it puts real pressure on the teams responsible for its development. As Melissa Heikkilä points out in the MIT Technology Review, “burnout is becoming increasingly common in responsible AI teams.” Larger organizations have “invested in teams that assess how our lives, societies, and political systems are affected by how those systems are designed, developed, and deployed.” In small and medium-sized enterprises and start-ups, by contrast, these responsibilities fall on developers, engineers, and data scientists.

The result – even in the largest companies – is that “teams working on responsible AI are often left to their own devices,” notes Melissa Heikkilä. “This job can be just as psychologically draining as content moderation. Ultimately, members of these teams may feel undervalued, which can affect their mental health and lead to burnout.”

The speed of AI adoption in recent years has raised the pressure to intense levels. AI has moved from the lab to production lines “faster than expected in recent years,” says Andy Thurai, a strategic analyst at Constellation Research who has championed responsible AI. Managing responsible AI “could be particularly exhausting if staff have to moderate content, decisions, and data that go against their beliefs, views, opinions, and culture, while trying to remain neutral and keep their own convictions out of it. Since AI works 24/7/365, and decisions made by AI are sometimes life-changing events, the humans in the loop are expected to keep pace, which can lead to burnout and error-prone decisions.”

The law lags behind

Laws and governance “have not advanced as fast as AI,” he adds. “Combined with the fact that many companies lack proper procedures and guidelines for ethical AI and its governance, this makes the process even more complicated.”

Add to that potential challenges from courts and legal systems, “which are beginning to impose heavy penalties and force companies to reverse their decisions,” and you have a climate that is “particularly stressful for employees trying to apply the rules to AI systems.”

Management support is also lacking, which adds to the stress. A study of 1,000 executives published by the MIT Sloan Management Review and the Boston Consulting Group confirms this: while most executives agree that “responsible AI is critical to mitigating technology-related risks – especially on issues of safety, bias, fairness, and privacy,” they recognize that it is not treated as a priority.

How to reduce the risks

So how do proponents, developers, and AI analysts address potential burnout issues? Here are some ways to alleviate AI-induced stress and burnout:

  • Inform business leaders of the consequences. Unfiltered AI decisions and results risk legal action. “Leaders should view spending on ethical and responsible AI as a way to improve corporate accountability rather than as a stand-alone expense,” says Andy Thurai. “While spending less money now could improve their bottom line, a single adverse court judgment would overshadow the savings from these investments.”
  • Obtain appropriate resources. The stress induced by reviewing AI is a new phenomenon that requires rethinking the support companies provide. “A lot of mental health resources at tech companies focus on time management and work-life balance, but more support is needed for people working on emotionally and psychologically challenging topics,” writes Melissa Heikkilä.
  • Work closely with the business to ensure responsible AI is a priority. “Any AI must be responsible,” emphasizes Andy Thurai. He cites the MIT-BCG study (mentioned above), which found that only 19% of companies that treat AI as their top strategic priority are working on responsible AI programs. “That number should be close to 100%,” he says. Managers and employees should be encouraged to engage in holistic decision-making that incorporates ethics, morality, and fairness.
  • Ask for help in advance. “Seek help from experts to make ethical AI decisions, rather than leaving them to AI engineers,” Andy Thurai insists.
  • Keep humans in the loop. Always include back-up strategies in the AI decision process, and be flexible and open to redesigning systems (a minimal fallback of this kind is sketched first after this list). A survey by SAS, Accenture Applied Intelligence, Intel, and Forbes reveals that one in four respondents admits having had to rethink, redesign, or disable an AI-based system due to questionable or unsatisfactory results.
  • Automate as much as possible. “AI is processing at very large scale,” Andy Thurai points out. “Manually validating data quality and results does not work. Businesses need to implement AI or other large-scale solutions to automate the process.” The second sketch after this list shows a simple automated data-quality gate.
  • Keep bias out of the data. The data used to train AI models may contain implicit biases due to data-set limitations, so the data that goes into AI systems needs to be carefully vetted (see the third sketch after this list).
  • Validate the algorithms before putting them into production. The data that feeds AI algorithms can change from day to day, so the algorithms need to be tested constantly (the fourth sketch after this list illustrates a simple drift check).
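
A minimal sketch of a human-in-the-loop fallback, as mentioned in the list above: predictions the model is not confident about are routed to a human reviewer instead of being applied automatically. The threshold, queue, and case names are illustrative assumptions, not details from the article.

```python
# Human-in-the-loop fallback sketch: auto-apply only confident predictions,
# escalate everything else to a human review queue.
from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case


@dataclass
class ReviewQueue:
    """Holds cases awaiting a human decision."""
    pending: List[Tuple[str, str, float]] = field(default_factory=list)

    def submit(self, case_id: str, prediction: str, confidence: float) -> None:
        self.pending.append((case_id, prediction, confidence))


def decide(case_id: str, prediction: str, confidence: float,
           queue: ReviewQueue) -> str:
    """Apply confident predictions; escalate the rest as a back-up strategy."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction          # automated decision
    queue.submit(case_id, prediction, confidence)
    return "PENDING_HUMAN_REVIEW"  # a person makes the final call


queue = ReviewQueue()
print(decide("loan-001", "approve", 0.97, queue))  # approve
print(decide("loan-002", "deny", 0.62, queue))     # PENDING_HUMAN_REVIEW
```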
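
The second sketch illustrates the kind of automated validation Andy Thurai describes: a data-quality gate that checks each incoming batch before it reaches a model. The column names and thresholds here are assumptions for illustration only.

```python
# Automated data-quality gate sketch: flag missing values, out-of-range
# values, and duplicates in an incoming batch instead of checking by hand.
import pandas as pd


def validate_batch(df: pd.DataFrame) -> list:
    """Return a list of data-quality problems found in a batch."""
    problems = []
    # Completeness: no more than 1% missing values per column (assumed limit).
    for col, rate in df.isna().mean().items():
        if rate > 0.01:
            problems.append(f"{col}: {rate:.1%} missing values")
    # Range check on an assumed numeric feature named 'age'.
    if "age" in df.columns and not df["age"].between(0, 120).all():
        problems.append("age: values outside [0, 120]")
    # Duplicate rows silently inflate their weight in training.
    dupes = int(df.duplicated().sum())
    if dupes:
        problems.append(f"{dupes} duplicate rows")
    return problems


batch = pd.DataFrame({"age": [34, 51, -3], "income": [42_000, None, 65_000]})
for issue in validate_batch(batch):
    print("DATA QUALITY:", issue)
```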
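
The third sketch is one simple way to probe training data for bias before a model ever sees it: compare positive-label rates across groups (a demographic-parity gap). The column names and the tolerance are assumptions; real audits would use richer fairness metrics chosen with domain experts.

```python
# Bias probe sketch: measure the largest gap in positive-label rate
# between any two groups in the training labels.
import pandas as pd


def parity_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Largest difference in positive-label rate between groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())


train = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "label": [1, 1, 0, 0, 0, 1],
})
gap = parity_gap(train, "group", "label")
print(f"parity gap: {gap:.2f}")
if gap > 0.2:  # assumed tolerance
    print("Potential label bias: investigate before training")
```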
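
Finally, a fourth sketch of pre-production validation against day-to-day data change: compare today's input distribution with the training distribution using a two-sample Kolmogorov-Smirnov test, and block the release if the feature has drifted. The significance level and the synthetic data are assumptions.

```python
# Drift-check sketch: KS test between training-time and current inputs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
todays_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # drifted

stat, p_value = ks_2samp(training_feature, todays_feature)
print(f"KS statistic={stat:.3f}, p={p_value:.3g}")

if p_value < 0.01:  # assumed significance level
    print("Input drift detected: re-test the model before promoting it")
else:
    print("No significant drift: proceed with the usual validation suite")
```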

“It is easy, in this polarized and biased world, to label the ethical decisions made by AI as wrong,” notes Andy Thurai. “Companies should pay close attention both to the transparency of AI decisions and to the ethics and governance that are applied. AI whose decisions can be dissected from top to bottom, and transparency, are two important elements. Combine them with regular audits to assess and correct processes.”

Source: ZDNet.com
