Interview with Hannah van Kolfschooten, researcher and lecturer at the Law Centre for Health and Life (LCHL), University of Amsterdam, and consultant on patients’ rights and European regulation of health AI for Health Action International. She currently teaches and coordinates the course International and European Health Law in the Health Law master’s program at the University of Amsterdam. She is also a PhD fellow at the Amsterdam Institute for Global Health and Development (AIGHD) and an affiliated member of the Amsterdam Centre for European Law and Governance (ACELG). Her PhD research concerns EU regulation of algorithmic decision-making in public health and healthcare, with a focus on safeguarding patients’ rights.
1. Why is it important to regulate artificial intelligence, especially in the health sector?
Using AI technologies in healthcare can reinforce deep-rooted systemic and societal patterns of bias and health discrimination in three ways. First, AI systems are often trained on ‘biased’ data that, for example, underrepresents certain population groups or contains harmful stereotypes and patterns of discrimination. There are many instances of racial bias exhibited by AI tools in the health sector, e.g., lower performance of AI skin cancer diagnosis for people of colour. Second, the way AI systems are designed can exclude certain population groups from accessing healthcare, such as people with lower digital literacy. Finally, how AI systems are used can be unfair too, for example when only affluent patients or large, well-funded hospitals can afford the systems, while underfunded facilities cannot. If not regulated, this will affect access to and quality of care and threaten patients’ rights, especially for already disadvantaged population groups.
2. What is the added value of the European AI Act proposal in regulating AI in health, compared to other existing legislation?
Currently, there is no specific regulation for AI used in health. When AI products qualify as ‘medical devices’ under the Medical Devices Regulation, they have to meet certain safety and quality requirements – but there are no extra requirements regarding transparency or explainability of medical devices using AI. Especially for high-risk systems, the AI Act adds an extra layer of protection to this regime, for example by setting requirements for quality of data. It also introduces some welcome provisions on transparency, for example, registration in an EU database, and the obligation to inform people about the use of emotion recognition (which is used in elderly care). On top of that, the lengthy political discussions around the AI Act seem to have raised more awareness about the risks of AI for fundamental rights – also in healthcare settings.
3. What are the limitations of the AI Act as it is currently formulated?
The main limitation of the AI Act for healthcare is that it is a ‘horizontal’ act (its provisions apply to all sectors), while healthcare is a very specific market. This has consequences for the protection the AI Act offers to patients. To illustrate, the AI Act mainly protects ‘users’ of AI systems, but in healthcare the insurer, hospital, or healthcare professional is often the one who ‘uses’ the AI system. This leaves the patient – the one who is put at risk – out of the equation. Another limitation of the Commission’s proposal is the lack of a compulsory fundamental rights impact assessment for developers that specifically considers health rights. A further shortcoming is the limitation of the ‘high-risk’ category to medical devices, while many AI systems used in elderly care or on smartphones do not qualify as medical devices. Another limitation is the exclusion of AI systems used for national security (think of all the body-related systems at airports, or Covid-19 apps). Finally, the European Parliament’s proposal to allow developers to decide for themselves that their systems are not high-risk – if endorsed in the negotiations – would severely weaken the protection of the AI Act.