By Tom Whittaker, senior associate at law firm Burges Salmon.
Rishi Sunak, the UK’s Prime Minister, has called for greater innovation in public services, including healthcare. A particular focus is on AI and the government has announced funding to roll out AI across the NHS.
At the same time, the UK has recognised that the risks need to be addressed as well as the opportunities, and is planning the first major global summit on AI safety.
HealthTech regulations are one way to help realise the opportunities of AI whilst managing the risks; “regulate to innovate”.
Here we summarise the key regulation on the horizon and what those in HealthTech can do to navigate those regulations.
The HealthTech industry needs to be aware of the UK’s approach to AI regulation, changes to existing regulation and, in any event, regulatory intervention into types of AI that are used by HealthTech but not specific to it.
The UK’s approach to AI regulation
The UK’s framework for regulating AI was published in March 2023 (also known as ‘the White Paper’) (see our flowchart to navigate the UK’s position).
• No AI-specific regulator and no AI-specific regulations
• Instead, existing regulators will consider what changes are required to existing regulations (if any)
• The White Paper sets out five principles to guide and inform the responsible development and use of AI in all sectors of the economy: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress. Regulators are expected to publish guidance on how those principles apply within their remit.
The UK takes a context-specific approach, regulating based on the outcomes AI is likely to generate in particular applications (rather than regulating the technology).
What those principles look like in practice will differ between sectors. The White Paper recognises that an AI chatbot in fashion retail requires a different regulatory approach to an AI chatbot in medical diagnosis.
What the principles look like in practice will also differ within each sector.
Take as an example appropriate explainability. Explainability is the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system.
What is appropriate should be proportionate to the risk(s) of an AI system, and some consider that increasing explainability can reduce an AI system’s performance.
Those risks will be different for different groups within the healthcare sector.
For example, a patient may prioritise an AI system’s accuracy in diagnosing a serious illness over understanding how it reached its decision.
In contrast, medical staff may want to increase explainability (at a potential decrease to performance) to ensure they understand the AI output in order to integrate factors into their other decision-making processes.
Changes to regulation
Changes to existing regulation relevant to HealthTech are expected in any event. The MHRA has revealed a roadmap for regulatory reform of Software and AI as a medical device.
The roadmap gives Tech and Healthcare companies details of the steps MHRA intends to take to make the UK a more attractive global destination for Software and AI medical devices.
This includes clarifying existing regulation and producing guidance, for example on principles of good machine learning practice and on effective explainability.
Regulators focus on AI technology used by but not specific to HealthTech
HealthTech providers may look at using large language models (LLMs) as part of generative AI systems, such as that enabled by GPT-4.
For example, generative AI could help address administrative tasks, such as quickly writing-up a patient interaction into clinical records for review and approval by the clinician.
The White Paper confirms that the UK is looking at how (and whether) those LLMs should be regulated.
Individual regulators have set their sights on LLMs.
For example, the Competition and Markets Authority has launched a review into LLMs to consider ‘what are the likely implications of the development of AI foundation models for competition and consumer protection?’
The outcomes may impact how, when, and where HealthTech providers use LLMs.
The HealthTech industry should also be aware that the EU is taking a different approach to the UK on AI regulation.
In short, the EU looks set to:
• Enact AI-specific regulations (see our flowchart for navigating the EU AI Act and post on the AI Liability Directive) and regulators tasked with enforcing those regulations.
• Use a risk-based approach. AI systems which pose unacceptable risks will be prohibited, whilst obligations will be imposed on different actors in the AI value chain for high-risk AI systems.
• Potentially impose other obligations, such as ensuring minimum levels of transparency to persons affected by AI systems, even if an AI system is not considered high-risk.
The EU’s approach is relevant to those operating in the UK. Those looking to sell or deploy their AI systems into the EU market need to comply with the EU’s regulations.
Further, developing AI systems often involves many different parties – spanning system design, model design and development, data collection, training and validation – and some of those parties may be subject to EU regulations and have to adapt their practices.
In any event, the market may move towards complying with EU regulations which may be seen as the ‘gold standard’ for AI regulations globally. This is one of the EU’s ambitions for its AI regulations.
Many are concerned with understanding whether and to what extent EU regulations may affect them. The AI Act is expected to be passed late 2023/early 2024 with a subsequent transition period.
Many will recall the length of time it took to prepare for compliance with the EU’s data protection regulations (GDPR).
The complexity of AI systems, value chains and potentially HealthTech business models means that any changes in response to regulation can take a long time to implement.
And there are potentially significant consequences of breaching the EU AI Act. AI systems may have to be withdrawn from the market and there is the potential for fines of up to €40m or 7% of global turnover for the most serious breaches.
Developers and users of AI systems in HealthTech may find it difficult to know what regulations apply to a specific technology. Here are a few things that can help.
1. Identify and consult regulatory resources. UK regulators have launched the Artificial Intelligence and Digital Regulations Service to help developers and adopters of AI and digital technologies in health and social care understand their regulatory obligations.
The service aims to provide a checklist for both HealthTech companies and care providers, to ensure they are doing the right things, in the right order, to meet their obligations under law and in accordance with best practice guidelines.
It has been established to create a central resource collating the relevant legislation, guidelines and best practice guidance, covering the steps from developing products and bringing them to market to ongoing evaluation and monitoring of health and care services.
This should be seen in the wider context of the UK’s AI strategy, including the launch of other useful resources such as the UK AI Standards Hub.
2. Engage early. The UK, EU and many other jurisdictions have made the direction of travel clear – expect AI-specific regulations and/or guidance in the near future (1-3 years).
Understand what AI systems are currently being used, and what is intended for the future, in your organisation and by third-party suppliers. Also, consider what regulations are coming and what impact they may have.
3. Engage with the relevant stakeholders. AI systems may not fall within one part of an organisation so multiple internal and external stakeholders may need to be part of the preparations.
Consider who is involved in the AI value chain – who internally and which third parties are necessary to the success of your AI systems?