“We must make health data a common good for research”

Whether in care or in research, and whether it likes it or not, the biomedical world is now immersed in massive health data and its exploitation by artificial intelligence (AI). This is not just a matter of access to more information or to more efficient diagnostic aids; it is a genuine revolution in medical and research practices. A new empirical medicine is developing, based no longer solely on the practitioner's experience but on large volumes of very heterogeneous data and on the iterative learning a machine can perform on them. A new generation of AI algorithms will go beyond what the practitioner interprets – radiological images, histology slides, electrocardiogram traces – to link these to new pathophysiological patterns, and even to the creation of digital twins.

The experimental approach described in the 19th century by the French physiologist Claude Bernard – observation, hypothesis, experimentation, interpretation, still the foundation of biomedical research – is being overtaken by neural networks and AI approaches that generate knowledge without a priori assumptions.

An economic model must also be built. This raises the question of the nature of the asset created, of how profits are shared, and of intellectual property. Who discovered what? Those who generate the data? The creators of the algorithms? The medical researcher? The patients themselves? And how should the AI algorithms used in prevention and care be reimbursed?


At the heart of AI in medicine lies data: the individual information obtained or generated during care, from civil status to imaging results, medical biology, genomics and the precious data of “real life” – visits to the doctor, drugs purchased at the pharmacy, hospitalizations. This apparently banal information must be transformed into massive, high-quality data – structured, organized, qualified and annotated – that AI can use. This first requires guaranteeing the quality of the initial data. It also requires ensuring their interoperability and facilitating their reuse by research players on a European or even global scale. A whole chain of medical, technical and regulatory skills must be put in place.
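As an illustration only, here is a minimal sketch in Python of what "qualifying" a raw care record might mean in practice: checking mandatory fields, pseudonymizing the identifier, and keeping the result in a structured form that can later be annotated and reused. The record structure, field names and pseudonymization scheme are assumptions for the example, not drawn from any particular standard such as HL7 FHIR.

```python
import hashlib
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical, minimal record structure; field names are illustrative.
@dataclass
class CareEvent:
    patient_pseudonym: str            # irreversible pseudonym, never the raw identifier
    event_type: str                   # e.g. "consultation", "dispensation", "hospitalisation"
    event_date: date
    code: str                         # a coded concept (ICD-10, ATC, ...), kept as received
    annotation: Optional[str] = None  # expert annotation added during qualification

def pseudonymize(national_id: str, salt: str) -> str:
    """Derive a stable pseudonym so records can be linked without exposing identity."""
    return hashlib.sha256((salt + national_id).encode("utf-8")).hexdigest()[:16]

def qualify(raw: dict, salt: str) -> Optional[CareEvent]:
    """Turn one raw care record into a structured, quality-checked event.

    Returns None when mandatory fields are missing, so low-quality records
    are flagged rather than silently propagated into the research dataset.
    """
    required = ("national_id", "event_type", "event_date", "code")
    if any(not raw.get(k) for k in required):
        return None
    return CareEvent(
        patient_pseudonym=pseudonymize(raw["national_id"], salt),
        event_type=raw["event_type"],
        event_date=date.fromisoformat(raw["event_date"]),
        code=raw["code"],
    )

# Usage: one well-formed record and one incomplete record.
records = [
    {"national_id": "1850759123456", "event_type": "dispensation",
     "event_date": "2023-03-14", "code": "ATC:N02BE01"},
    {"national_id": "2900112654321", "event_type": "consultation",
     "event_date": "2023-03-15", "code": ""},  # missing code: rejected
]
qualified = [e for r in records if (e := qualify(r, salt="example-salt")) is not None]
print(f"{len(qualified)} of {len(records)} records pass the quality gate")
```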

Finally, a proactive dynamic of data use is needed: more than simple passive collection, it must be coupled with human oversight of AI and its uses, a guarantee of its acceptability to all. The matter is urgent. Global scientific and economic competition is frantic, and national sovereignty is at stake, a context in which our solidarity-based system of health financing can find an even more virtuous justification, with each contributing for all.

