Generative AI in the medical sector: a threat to the poorest countries? The WHO issues a warning


Camille Coirault

January 20, 2024 at 2:27 p.m.



At the center of concerns: the spread of erroneous information © PopTika / Shutterstock

The World Health Organization (WHO) is sounding the alarm: as AI gains traction in the medical sector, it may pose a risk to lower-income countries. Regulation by political decision-makers is becoming increasingly urgent.

In a recent report entitled Ethics and governance of artificial intelligence for health, the WHO addresses large language models (LLMs), the technical term for generative AI systems designed to process natural language, such as Google Bard or ChatGPT. If the development of these emerging technologies is left in the hands of wealthy countries and companies, they could negatively impact low-income countries, particularly in the field of health.

If data from the least-resourced populations are not used to train these models, the resulting algorithms could be biased. Unequal access to these new technologies is another aggravating factor.

Potential inequalities and biases

During a press conference, Alain Labrique, director of digital health and innovation at the WHO, said: “The last thing we want to see happen with this technological advancement is the spread or amplification of inequality and bias in the social fabric of countries around the world.” This concern echoes one of the latest IMF reports on the impact of AI on wage inequality. With the explosive popularity of consumer LLMs, the WHO has already had to update its AI guidelines, less than three years after their initial publication in 2021.

The WHO emphasizes that LLMs have been “adopted faster than any other consumer application in history.” Capable of producing images, text, and video, these models are increasingly being put to use in the health sector. Our era is a bit like the Wild West of AI, and the WHO fears a race to the bottom in which a large number of companies, squeezed by competition, rush to market poor-quality AI models. Another risk mentioned is model collapse: a self-reinforcing cycle of misinformation in which LLMs trained on distorted data propagate erroneous information and pollute public information sources.

Another aspect to consider is unequal access to LLMs. The WHO explains on page 21 of its report: “The digital divide […] limits the use of digital tools to certain countries, regions or segments of the population. This divide leads to other inequalities, which affect the use of AI, and AI itself can reinforce and worsen these disparities.”

The WHO also fears that the poorest populations will turn to free LLMs for health information as a substitute for real health professionals, whose services only the wealthiest could afford. A major problem is that most of these models operate primarily in English. “So while they can receive input and provide output in other languages, they are more likely to generate false or erroneous information,” the organization explains.


The private sector’s monopoly on AI-based technologies is rightly worrying © PeopleImages.com – Yuri A / Shutterstock

Regulation and training as the only safeguards?

The WHO emphasizes the importance of establishing a genuine regulatory framework to govern the development and use of AI-based technologies. In the spirit of the EU Artificial Intelligence Act, “governments of all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies,” says Labrique. Another WHO recommendation is to increase the participation of civil society in the development of these language models; for the moment, the development of AI tools such as LLMs is far from participatory.

The WHO also warns against “industrial capture” of LLM development. Given the very high costs of building and maintaining these models, tech giants have already outstripped governments and universities in AI research, and “an unprecedented number” of doctoral students are leaving academia for the private sector. Another important point the WHO develops in its report is the need to give LLM developers ethical training, comparable to that of doctors: a sort of “AI Hippocratic Oath” that would guarantee more responsible conduct in the creation and application of these models.

While AI applied to the medical field has tremendous potential, the risks it carries must not be denied. Nothing new under the sun: these risks fall most heavily on the countries with the fewest resources, hence the urgency of the situation. As long as the private sector leads the pack and avoids collaboration with political institutions, this sword of Damocles will remain hanging.

Sources: Nature, World Health Organization


