Artificial intelligence in health, the caution of the ethics committee

Artificial intelligence used for medical diagnosis opens up major prospects, for example by making it possible to identify microlesions that escape the human eye on X-rays, or by allowing doctors to continuously monitor the biological indicators of patients at home. But it must not be used "in a logic of substitution for human intervention," insist the National Consultative Ethics Committee (CCNE) and the National Pilot Committee for Digital Ethics (CNPEN) in a joint opinion made public on Tuesday, January 10.

The opinion had been requested by the Prime Minister in a letter dated July 15, 2019. Because artificial intelligence systems for medical diagnosis (SIADM, after the French acronym) fall as much within bioethics as within digital ethics, the two bodies conducted a joint reflection after hearing some ten experts on the subject. In accordance with the ministerial referral, they did not address the question of prevention or treatment, but only that of medical diagnosis. Although the opinion was only made public on January 10, it was adopted on November 23 and 24, respectively, by the CNPEN and the CCNE.

A necessary critical look

"Even if a SIADM can be reassuring because of its rigorous and automatic operation, it nonetheless plunges both the patient and the medical team into a certain degree of uncertainty," the document reads. The algorithm can in fact make errors, both false negatives (a lesion or anomaly escapes it) and false positives (it identifies lesions that are not actually there). More broadly, these tools "can be binary when not used critically."
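The distinction between the two types of error is a quantitative one: evaluating such a system means comparing its outputs with a reference diagnosis and counting each kind of miss separately. As a purely illustrative sketch, not drawn from the committees' opinion and using hypothetical data, the following Python snippet shows how false negatives and false positives would be tallied:

```python
# Illustrative sketch only (not from the CCNE/CNPEN opinion): counting false
# negatives and false positives when a diagnostic algorithm's outputs are
# compared against a reference ("ground truth") reading.

def diagnostic_error_counts(predictions, ground_truth):
    """Count false negatives (missed lesions) and false positives
    (lesions flagged that are not actually present).

    predictions and ground_truth are parallel lists of booleans:
    True = lesion reported, False = no lesion reported.
    """
    false_negatives = sum(1 for p, t in zip(predictions, ground_truth) if not p and t)
    false_positives = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    return false_negatives, false_positives

# Hypothetical example: six X-ray readings compared with a reference diagnosis.
algo = [True, False, True, False, False, True]
truth = [True, True, False, False, False, True]
fn, fp = diagnostic_error_counts(algo, truth)
print(f"False negatives: {fn}, false positives: {fp}")  # -> 1 and 1
```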

"Human control at all stages of care" is thus one of the sixteen recommendations issued in this opinion, which also lists seven points of vigilance. Among them is the risk of considering these digital systems as "substitute solutions for medical teams" in a context of scarce hospital resources.

Artificial intelligence does indeed promise valuable time savings. By performing repetitive tasks, it could allow doctors to communicate more with their patients and to handle more complex situations. But the CCNE and the CNPEN consider that the technology should not become "a way to compensate for the deficient organization" of the health system. "The obstacles to access to care cannot be removed by digital tools alone, the appropriation of which by patients is unequal," the document says.

Fourth opinion for the CNPEN

Another danger is the widening of the gap between, on the one hand, doctors more or less familiar with digital technology together with health and artificial intelligence professionals and, on the other, patients. The opinion therefore recommends promoting a new role, that of a digital assistant, or helper, who would help patients better understand the results produced by artificial intelligence.

It was in December 2019 that, at the request of the Prime Minister, the CCNE created a pilot digital ethics committee. Composed of around thirty members and directed by Claude Kirchner, emeritus research director at Inria, it had already issued three opinions before this one, relating in particular to conversational agents and autonomous vehicles.
