The GDPR does apply to AI in healthcare. However, data protection regulations do not resolve certain issues such as bias.

8 November 2023 (updated 16 January 2024)

Interview with Cécile Crichton, PhD candidate at the Institut droit et santé of Université Paris Cité and lecturer on the Master in Artificial Intelligence Law at the Institut Catholique de Paris. Cécile Crichton specialises in AI law and personal data protection and occasionally writes for Dalloz Actualité.

1. The French Bioethics Law, as revised in 2021, is the only text that provides a framework for the use of AI in health in France. What does this law provide for and what are its limitations?

The bioethics law took a long time to be adopted because of certain debates, notably on surrogacy. Artificial intelligence was somewhat left aside because thinking on the subject was not yet settled. At the time, the question was whether all algorithms should fall within the definition of artificial intelligence, or whether it should be confined to machine learning. In the end, the Bioethics Law only inserted a general provision into the Public Health Code, article L. 4001-3, which therefore applies to all situations. The main purpose of this provision is to ensure greater transparency for medical devices incorporating processing whose learning was carried out on a massive data set, in other words, a machine learning system.

The scope of the text remains very limited, as it excludes issues relating to insurance, civil liability, doctors' liability and the certification of medical devices. These issues were completely sidestepped by the French Bioethics Law. This is not necessarily a bad thing; it is a way of being cautious, given that the use of artificial intelligence is, after all, fairly recent. So no, the bioethics law is not enough, because it does not address a number of issues. However, it does lay the foundations for better control and the beginnings of regulation.

The law may be seen as incomplete in some respects, but there are a huge number of other texts dealing with certain issues, such as patient consent, which are applicable to all situations, even those involving AI.

2. Are there other legislative provisions that can fill these gaps, such as the European General Data Protection Regulation (GDPR)?

The GDPR does indeed apply to AI in healthcare. However, data protection regulations do not resolve certain issues such as bias. Biases are often linked to the under-representation of certain populations in datasets. The GDPR requires data to be accurate and kept up to date when it is processed, but data accuracy is not the same as representativeness. For example, an AI system trained on data from the Chinese population will be ineffective in France, no matter how accurate each record is. So there are still gaps.
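To make the accuracy/representativeness distinction concrete, here is a minimal sketch in Python. All group names and shares are hypothetical; it simply compares a training set's demographic composition with that of the population the model is meant to serve. Every individual record can be perfectly accurate in the GDPR sense while the overall distribution remains skewed:

```python
from collections import Counter

# Hypothetical demographic shares of the population the model will serve
# (all group names and numbers are assumptions for illustration).
target_shares = {"group_a": 0.45, "group_b": 0.40, "group_c": 0.15}

# Demographic group of each record in a hypothetical training set.
training_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

counts = Counter(training_groups)
n = len(training_groups)

print(f"{'group':<10}{'training':>10}{'target':>10}")
for group, target in target_shares.items():
    observed = counts[group] / n
    # Flag groups whose training share is less than half their target share.
    flag = "  <- under-represented" if observed < 0.5 * target else ""
    print(f"{group:<10}{observed:>10.2f}{target:>10.2f}{flag}")
```

Here groups b and c would be flagged even if every one of their records is accurate and up to date, which is exactly the gap the GDPR's accuracy principle does not close.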

In terms of liability, I believe there is still a long way to go. But all these issues are under discussion, particularly at the European level with the proposed AI Act, as well as a directive on liability for AI and a possible adaptation of the defective products directive to artificial intelligence.

In France, it will be a question of adapting pre-existing rules: liability for damage caused by things, vicarious liability, liability for defective products, and so on. The WHO is right to call for clarification of the rules on liability, since we still don't have a clear answer. For the moment, we have to wait for case law, which creates legal uncertainty. This uncertainty greatly worries doctors, who have to rely on a machine without necessarily understanding it, but also patients, who don't really understand AI.

3. As you said, discussions are underway at European level and many questions will certainly be answered by the AI Act, the draft European regulation on artificial intelligence. How will this regulation be applied in France? How will it affect AI developers?

This text is a regulation, directly applicable in our legal system. The AI Act lays down rules for high-risk AI systems, including AI systems embedded in medical devices subject to CE certification in Europe.

For these devices, manufacturers will have to comply in the same way as for CE marking. Member States will therefore have to appoint oversight bodies to ensure that companies comply with the regulation. As it stands, the text gives Member States the choice of either creating a body specifically for AI, or giving these powers to a pre-existing body.

In France, the CNIL has already shown its interest in taking on such a role, and is doing everything in its power to be appointed as AI oversight body. It has, for example, set up a department dedicated to artificial intelligence and has published several papers demonstrating that its original remit is already in line with such issues. It is likely to be designated as a competent authority.

The AI Act proposes a "compliance regime", as with the GDPR. This means that it will be up to the creator of the artificial intelligence system to ensure that the system complies with the regulation. Developers will, for example, have to produce a certain number of documents to prove their compliance, and will be obliged to provide these documents to the supervisory authority on request.
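As a rough illustration of what keeping such documentation on file might look like in practice, here is a hypothetical sketch in Python. The field names, file names and system details are all invented for illustration; the AI Act's annexes define the actual required content, which this sketch does not reproduce:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical shape of a provider's compliance file; the AI Act's annexes
# define the real required content, which this sketch does not reproduce.
@dataclass
class ComplianceRecord:
    system_name: str
    intended_purpose: str
    training_data_description: str
    risk_classification: str
    documents: list = field(default_factory=list)

record = ComplianceRecord(
    system_name="hypothetical-triage-assistant",
    intended_purpose="decision support for emergency department triage",
    training_data_description="de-identified admissions data (assumed source)",
    risk_classification="high-risk (embedded in a CE-marked medical device)",
    documents=["technical_documentation.pdf", "conformity_assessment.pdf"],
)

# Serialised so it can be handed over to the supervisory authority on request.
print(json.dumps(asdict(record), indent=2))
```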

4. The development of artificial intelligence in the health sector also raises the question of the storage of massive health data sets, which are used, among other things, to train algorithms. In France and Europe, the storage of health data is an issue to which public authorities are trying to respond in order to make the data more available for research purposes. What are the implications for the protection of personal data?

As the European Health Data Space is still at the planning stage, I cannot comment on it. However, from the point of view of data protection in France, certain aspects need to be reviewed. The Health Data Hub project, hosted by Microsoft, is a case in point. The challenge for research is accessing data so that machine learning models can be trained. It's true that the GDPR is restrictive in this regard. However, there are exceptions for research that allow exemptions from a number of obligations.

It is important to focus on data protection and, in particular, cyber security risks, as hospitals are primary targets for attacks due to the high value of healthcare data. Even if the GDPR applies and measures exist to protect against cyber attacks, there is no such thing as zero risk.

Besides, companies are now trying to develop data storage models that comply with the GDPR. Solutions therefore exist to ensure that data is accessible while protecting individuals. Simple procedures can be applied, such as putting in place cybersecurity measures and ensuring that the people concerned have given their consent. Health data is so important that it requires at least these basic guarantees.
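To show how basic those two guarantees can be in code, here is a minimal sketch combining a consent check with encryption at rest. It assumes the third-party Python `cryptography` package; the consent register, patient identifiers and record contents are hypothetical, and real key management would sit outside the application:

```python
from cryptography.fernet import Fernet

# Hypothetical consent register: data subject id -> consent for research use.
consent_register = {"patient-001": True, "patient-002": False}

# In practice the key would live in a key-management service, not in code.
cipher = Fernet(Fernet.generate_key())

def store_health_record(subject_id: str, record: bytes) -> bytes:
    """Refuse storage without recorded consent; encrypt at rest otherwise."""
    if not consent_register.get(subject_id, False):
        raise PermissionError(f"no recorded consent for {subject_id}")
    return cipher.encrypt(record)

token = store_health_record("patient-001", b"blood pressure: 120/80")
print(cipher.decrypt(token))  # b'blood pressure: 120/80'
```

A call for "patient-002" would raise an error instead of storing anything, which is the point: consent and encryption are cheap, mechanical safeguards, even if, as noted above, they cannot reduce the risk to zero.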

The only way to guard against the risks associated with the large-scale storage of health data is to prohibit certain practices that are deemed to pose an unacceptable risk. This could be the case for the collection and storage of genetic data, for example.