Somerset NHS FT publishes AI policy covering safe integration, ethics, legal responsibilities and yearly reviews

In a LinkedIn post yesterday, Andy Mayne, chief scientist for data, operational research & artificial intelligence at Somerset NHS Foundation Trust, shared the trust’s final version of its AI policy, focusing on the need for safe integration and an approach balancing innovation with ethical and legal responsibilities.

With a view to future-proofing the document, the policy notes that whilst at present “even the best and most complex AI models (Siri, Alexa, Chat-GPT, etc.) do not surpass narrow AI…we must ensure protections are in place for future developments”.

Covering staff across the trust, as well as subsidiary company employees, contractors, processes using AI, data for AI, and technical systems using AI, the policy shares a commitment to equality and inclusion in the use of AI technologies, with aims to “prevent bias and discrimination in AI systems and to promote fairness and transparency in decision-making processes”.

In this regard, the policy states that AI systems should be designed and implemented in a way that prevents discrimination based on protected characteristics, and “thoroughly tested” to ensure they do not perpetuate bias, with third-party suppliers also required to demonstrate the representativeness of their datasets for the population of Somerset.
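As an illustration of what “thoroughly tested” could look like in practice, below is a minimal sketch of one common check: comparing selection rates and true-positive rates across groups defined by a protected characteristic. The data, threshold and metrics here are hypothetical examples for illustration only; they are not tests prescribed by the policy or used by the trust or its suppliers.

```python
# Illustrative only: a simple fairness audit comparing model outcomes
# across a protected characteristic. All data and the threshold are
# hypothetical, not from any supplier's actual validation.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical held-out test set: a protected attribute (two groups,
# coded 0 and 1), true outcomes, and model risk scores.
group = rng.integers(0, 2, n)
y_true = rng.random(n) < 0.3
scores = np.clip(0.3 * y_true + rng.normal(0.3, 0.2, n), 0, 1)

threshold = 0.5
y_pred = scores >= threshold

for g in (0, 1):
    mask = group == g
    # Selection rate: how often the model flags patients in this group.
    selection_rate = y_pred[mask].mean()
    # True-positive rate: of genuinely positive cases, how many flagged.
    tpr = y_pred[mask & y_true].mean()
    print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

# Large gaps between groups in these rates would be a red flag that a
# model perpetuates bias, in the policy's terms, and needs investigation.
```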

The policy also covers equal access to “opportunities provided by AI systems”, transparency in the use of AI systems, the need for new assets to be registered with the trust’s information asset register, and compliance with the DCB 0160 framework for clinical risk management, identifying risks throughout the product’s lifecycle.

When it comes to responsibility and liability, the policy sets out the requirement for “clear accountable lines”, with all AI models having human oversight involved in the decision-making process, and safeguards in place to allow human intervention where a model produces “incorrect, harmful, or misleading information”.

The policy includes worked examples. In the first, a person employed by Somerset NHS FT has used a large language model on patient identifiable information; the policy notes that “the liability is on the employee for the misuse of patient data. In this circumstance the AI model is not at fault”.

Another example covers a case in which “the organisation used a third-party AI product that was unfairly biased against a protected characteristic”. The policy adds: “the third party should have ensured that their training dataset was sufficient in representing a diverse range of patients. If this was not reported to the organisation then the third party is liable, if the organisation was aware and continued to use the product then they are liable.”

On cybersecurity, the document states that all assets integrating “AI generative add-ons” need to be assessed to ensure compliance, whilst the risk of accidental disclosure of information should be mitigated by excluding identifiable information from training datasets.

For purchasing decisions, it cites the need for trusts to “prioritise companies that are transparent with their operational practices”, for procurement to consider that training might be needed to help interpret results, and to ensure results obtained with AI software used in clinical care can be “explained in layperson terms” to support clinicians in relaying technical information to patients.

Setting out plans for monitoring, the policy outlines how all systems using AI need to be “logged and regularly reviewed (yearly)”, with an audit of compliance on the Information Asset Register. Transparency and public disclosure should be ensured by publishing details on the trust’s website of any AI system using trust data for its training.

As an appendix, the policy sets out an AI procurement checklist to be completed by software suppliers, covering aspects including AI ownership, purpose, bias and fairness, accountability, privacy, human oversight, explainability, algorithmic transparency, and security and robustness.

Andy also shared a second LinkedIn post, offering a “condensed” version of the AI policy as provided to the trust’s staff “to help them use AI safely”. Briefly summarising the key points from the policy around equality, fairness, transparency, security, and approval processes, the document also provides an overview of some of the ways AI is being used at the trust.

A few of those examples include dermatology, where the trust reports using the DERM system to review patient scans, helping to identify cancer and “prioritise patient pathways”; using Microsoft Copilot to automate workflows such as converting paper documents to digital; and using AI for decision support in radiology “to help with identifying conditions in X-ray, CT and MRI scans”.

The trust is also reportedly using AI to help predict admissions, with a model predicting “whether a patient will be admitted when a patient is triaged in ED”; to predict future activity; and to simulate operational flows using virtual models of hospital systems including ED, inpatients and theatres “to model changes in a safe environment”.
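For readers curious what a triage-time admission model of this kind might look like, here is a minimal, hypothetical sketch: a logistic regression over synthetic triage features. The features, data and figures are invented for illustration and do not describe the trust’s actual model.

```python
# Illustrative only: a minimal triage-time admission classifier.
# The features, data and model here are hypothetical and are not
# drawn from Somerset NHS FT's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical triage features: age, early-warning score, arrival by
# ambulance (0/1), and attendances in the past year.
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(0, 15, n),    # NEWS2-style early-warning score
    rng.integers(0, 2, n),     # arrived by ambulance
    rng.poisson(1.0, n),       # prior ED attendances
])

# Synthetic label: admission probability rises with age and acuity.
logit = -4.0 + 0.03 * X[:, 0] + 0.35 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The predicted admission probability supports, not replaces, the
# clinician: per the policy, a human stays in the decision-making loop.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out triage episodes: {roc_auc_score(y_test, probs):.2f}")
```

In a real deployment such a model would be trained on actual triage records, assessed under the clinical risk framework the policy cites, and used only as decision support with human oversight.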

To read the AI policy in full, please click here.

AI in the NHS: the wider trend

An HTN panel discussion from August looked at whether the reality of AI will live up to the current hype and at managing bias in healthcare data, covering topics such as what good looks like for responsible AI, ensuring inclusive and equitable AI, and the deployment of AI in the NHS.

A poll over on our LinkedIn page asked followers for their thoughts on the biggest concern for AI in healthcare, with options including equitability, bias, transparency, and regulation; regulation came out on top. There’s still a chance to have your say on this topic – take part in our most recent poll on where NHS funding for AI should go in the short term here.

NICE recently launched a new reporting standard designed to help improve the transparency and quality of cost-effectiveness studies of AI technologies, in a move it hopes will “help healthcare decision-makers understand the value of AI-enabled treatments” and offer patients “faster access to the most promising ones”.

In October, we looked at AI use cases in the NHS, including in supporting diagnosis, personalising treatment, predicting disease, and more. We also covered DeepHealth’s acquisition of London-based cancer diagnostic company Kheiron Medical Technologies Limited, as part of efforts to expand its portfolio of AI-powered diagnostic and screening solutions; and University Hospitals Coventry and Warwickshire’s use of AI to improve patient experience.

The launch of HTN’s AI and Data Awards is an opportunity to celebrate how AI technologies are making an impact across health and care, offering a platform to share innovations and projects to help shape future services and systems.
