WHO issues first global report on use of AI in healthcare
Artificial intelligence (AI) applications in health are rapidly increasing, reflecting the potential of these technologies to address challenges such as ageing populations and workforce shortages. In light of this, the World Health Organization (WHO) recently issued its first global report on AI in health, titled “Ethics and governance of artificial intelligence for health”, establishing six guiding principles for its design and use.
The report notes that AI is being used in high-income countries for diagnosis, prediction, and R&D, and that it also facilitates remote healthcare, data collection, and health promotion, notably as part of the COVID-19 response. However, because the technology can be misused, the document emphasises that AI systems should be carefully designed to reflect the diversity of socio-economic and healthcare settings. To this end, the WHO sets out six principles as the basis for AI regulation and governance:
- Protecting human autonomy;
- Promoting human well-being and safety and the public interest;
- Ensuring transparency, explainability and intelligibility;
- Fostering responsibility and accountability;
- Ensuring inclusiveness and equity; and
- Promoting AI that is responsive and sustainable.
The Office of the High Commissioner for Human Rights noted that AI and big data can advance the human right to health, provided that these emerging technologies are developed in an accountable way. These technologies should also ensure that vulnerable groups receive efficient, individualised care, for instance through assistive devices, built-in environmental applications, and robotics. At the same time, the Office pointed out that AI applications could dehumanise care, undermine the autonomy and independence of persons, and pose significant risks to patient privacy, all of which are contrary to the right to health. Given the potential for disparities between countries, such as unequal access to quality data, lack of infrastructure, and legal systems not yet equipped to regulate AI liability, the report recommends that the WHO and partner agencies seek to establish international norms and legal standards to ensure national accountability and to protect patients from medical errors.
As for EU efforts to act upon these principles, it is worth noting that the European Commission has appointed 52 representatives from academia, civil society, and industry to its High-Level Expert Group on Artificial Intelligence and has also issued Ethics Guidelines for Trustworthy AI.
The report mentions that many regulatory authorities are preparing considerations and frameworks for the use of AI, and that these should be examined, where appropriate in consultation with the relevant regulatory agency. It further recommends that international agencies and professional societies ensure their clinical guidelines keep pace with the rapid introduction of AI technologies, with the WHO providing the relevant support. In this context, national governments should also adopt comprehensive data protection laws in due time.