Artificial intelligence (AI) is permeating every part of society, and healthcare is no exception, raising a growing number of ethical questions.
But, as the National Pilot Committee for Digital Ethics (CNPED) and the National Consultative Ethics Committee for Life and Health Sciences (CCNE) point out, AI systems applied to medical diagnosis “produce results based on probabilistic approaches, on the one hand, and can be marred by errors, on the other.” Certain safeguards are therefore necessary.
These systems can now be used in medical imaging and screening to detect cancers and other illnesses, for example, sometimes with impressive reliability, not to mention the time savings they offer at a moment when hospitals face a shortage of resources and staff.
In a report made public on Tuesday, January 10, in response to a 2019 referral by the Prime Minister, the two expert bodies argue that while “healthcare teams and patients should not deprive themselves of the advantages provided by these tools,” they must “constantly give themselves the means to take a step back from the result provided.”
Human control and explainability
Their opinion lays down two essential conditions for integrating these devices into our healthcare system: they must always “be subject to human control,” even if this “will not necessarily remove the uncertainties,” and their results must be “explainable.”
The CNPED and the CCNE thus draw up a list of 16 recommendations to reconcile the opportunities AI promises for patients with the risks the technology poses.
First of all, they unsurprisingly call for research to continue, in particular through clinical trials assessing the risk-benefit ratio, and for these tools to be improved without making the healthcare of tomorrow rely exclusively on them. The established methods for making a medical diagnosis should therefore continue to be taught. They also note the importance of training healthcare professionals in AI, taking its ethical issues into account.
Trust and transparency, they say, must also be an integral part of our approach to AI. They suggest, for instance, that the medical report of a consultation mention the use of AI where relevant, and that developers guarantee a “certain level of explainability” when putting a tool on the market and present its performance and limits “in understandable terms.” They also call for the “justifications supporting the promises” of these devices, as well as “negative research results,” to be disclosed.
No technological solutionism
Finally, the two advisory committees call for vigilance in the development and use of these tools. They “must be considered as tools complementary to the responses needed to address the shortcomings of the healthcare system, in particular medical deserts, but must not be considered as substitutes for medical teams,” their opinion notes, insisting that economic considerations must not take precedence.
Particular attention must also be paid to the risks of surveillance and of adverse selection by insurance companies, as well as to data security issues.
Report Highlights the Importance of “Human Control” in the Use of AI in Healthcare