Health Technologies

COMMENT: AI and NHS data: Innovation must be built on trust – Health Tech World

By Davi Ottenheimer, VP of Trust and Digital Ethics, Inrupt

Aneurin Bevan, the Labour minister who founded the NHS, famously said he had to “stuff their mouths with gold” to win doctors over to the new service.

In January, Prime Minister Sir Keir Starmer unveiled the AI Opportunities Action Plan.

The plan presents a bold vision for making the UK “an AI superpower,” with healthcare data playing a central role through a proposed national data library.

However, the idea raises a fundamental implementation detail: how will NHS patient data be handled, protected, and shared in this new ecosystem – and who ultimately benefits from its value?

Machine-based intelligence, extending the reach of doctors themselves, offers the potential to revolutionise diagnostics, personalise treatments, and ease the burden on NHS services, as evidenced by recent NHS England initiatives using AI to predict patients at risk of becoming frequent users of emergency services.

Success, however, depends entirely on public trust in a system asking for authority over their lives.

The predatory and extractive data models that have dominated the American tech industry for decades pose serious risks when applied to UK healthcare: private companies can monetise patient information without appropriate compensation, build monopolistic advantages through exclusive access to public health data, and create closed-box systems that make rapid critical healthcare decisions without transparency or accountability.

The NHS is one of the UK’s most valued institutions, and any attempt to introduce AI into its operations must prioritise ethical data governance that prevents the many known and documented harms.

This becomes even more critical with the £14 billion investment from technology companies like Vantage Data Centres, Nscale and Kyndryl to build AI infrastructure, capital that must be deployed with proper governance to ensure it serves the public good rather than disappearing into weakly regulated private interests.

The history of NHS data-sharing missteps, such as the failed Care.data initiative, serves as a reminder that public resistance can derail even well-intentioned digital transformation projects.

Data exploitation has real consequences: from inflated drug prices when pharmaceutical companies gain exclusive insights, to algorithmic discrimination that can worsen health inequalities, to the erosion of doctor-patient confidentiality when sensitive information becomes a commercial or even political asset.

When questioned by journalists about private firms accessing health data for commercial use, Starmer responded that it’s “important we keep control of that data” while adding that he doesn’t think “we should have a defensive approach” to its use.

Striking this delicate balance, with default-deny as the norm, requires more than vague assurances: it needs robust privacy frameworks, transparent governance and, most importantly, a technology solution structured to prevent extraction by design, rather than merely chasing slippery oligarchs after they flee.

The Role of Patient Data in AI’s Success

The NHS already relies on patient data to improve healthcare delivery, but AI introduces new complexities into old concepts.

The best doctors need the least information to make the quickest assessments for the lowest cost, whereas many AI solutions remain stuck in an expensive and slow “bigger is better” mindset.

AI-driven models request vast amounts of data on the promise of enhanced predictive diagnostics and automated decision-making.

However, without strict oversight and patient control of data relevant to them, AI models risk being trained with outdated or even poisoned and biased data, leading to dangerous healthcare outcomes.

If patients feel they lack control over their own data, they may become guarded and withhold critical health information, which could, in turn, reduce AI’s effectiveness in diagnosing and treating conditions.

Europe has been tackling this with initiatives such as the European Digital Identity Wallet (EUDI Wallet) under eIDAS 2.0, which aims to give individuals more appropriate control over their data.

The NHS is similarly positioned to evolve its app into a comprehensive digital health wallet, where patients — not just institutions — manage and control access to their records.

The W3C Solid protocol, an open standard for decentralised data storage developed by web inventor Sir Tim Berners-Lee, is already proving to be a simple technical foundation for these wallets.

The Solid protocol is to application data what the World Wide Web was to unlocking documents from proprietary formats — a standardised way to store, access, and share personal information while maintaining user sovereignty.

This shift represents more than just a technical upgrade to the existing Web; it is a necessary evolution that ensures digital healthcare empowers individuals rather than turning them into captured and exploited data sources.

The Greater Manchester NHS Data Pilot has already demonstrated the benefits of patient-controlled data.

By securely connecting clinicians with patient-held information, the programme improved care outcomes for vulnerable groups, particularly dementia and elderly care patients.

This model should serve as a blueprint for the government’s AI strategy. As Brigitte West, product director at DrDoctor, noted about the plan, “This is much-needed recognition that AI can be used for operational reasons – so that clinical staff can spend less time on admin and more time delivering care.”

Such gains must remain focused on positive outcomes for patients in order to maintain the integrity of engagement and trust.

Avoiding the pitfalls of a Big Data model

Perhaps there’s a humorous pattern worth noting, where each Labour government since Wilson has announced a revolutionary technological solution to NHS inefficiencies, only for the subsequent administration to discover the same problems with new terminology.

From “computerisation” to “digitisation” to today’s “AI revolution,” the cycle continues with remarkable predictability.

This latest AI strategy raises concerns about allowing a domain shift in terminology to open the door for proprietary, large-scale data platforms to lock away NHS patient information.

While aggregating data in massive repositories may be hailed as efficient by certain lobby groups, it often restricts flexible access, introduces long-term vendor dependence, and limits the NHS’s ability to handle privacy requirements.

In such “Big Data” vendor setups, a single breach can be far more damaging, and it is unclear how patient information will ever be adequately secured at scale.

The more intellectually honest alternative is to adopt a user-centric open standard for data storage such as W3C Solid, which orients health information around patients while still allowing care providers to access necessary data.

Solid offers a far more scalable infrastructure in which each patient’s “data wallet” holds a more complete health record, accessible through explicit permissions tied to clear usage patterns.

This architecture makes accidental large-scale data breaches virtually impossible while simultaneously improving data availability and integrity.

Just as we recognise that individuals thrive better in their own environments with healthcare professionals visiting only when needed, our data should likewise remain under our control with access granted only when necessary.

Forcing all data to live in centralised repositories undermines this principle of autonomy. Our data should be as safely organised as ourselves.

Solid-based frameworks can satisfy healthcare needs without gathering all data into a single repository, using personal data pods with fine-grained access controls.

These standardised, user-controlled methods balance security, accessibility, and individual autonomy — helping the NHS maintain flexibility, comply with regulations, and uphold public trust when integrating AI into healthcare services.
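To make the pod model concrete, here is a minimal illustrative sketch in plain Python of how a patient-held data pod might enforce fine-grained, time-limited, revocable access grants. This is not a real Solid implementation, and every name in it (the classes, the record labels, the agent identifiers) is hypothetical; it only demonstrates the principle that access flows from patient-issued grants rather than from possession of a central database.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class AccessGrant:
    agent: str        # who may access, e.g. a GP practice
    resource: str     # which record in the pod
    modes: set        # permitted operations, e.g. {"read"}
    expires: datetime # grants are time-limited by default


@dataclass
class DataPod:
    owner: str
    records: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)

    def grant(self, agent, resource, modes, days=30):
        # The patient (pod owner) issues an explicit, expiring grant.
        self.grants.append(
            AccessGrant(agent, resource, set(modes),
                        datetime.now() + timedelta(days=days)))

    def revoke(self, agent, resource):
        # Revocation is immediate: there is no central copy to chase.
        self.grants = [g for g in self.grants
                       if not (g.agent == agent and g.resource == resource)]

    def read(self, agent, resource):
        # Access succeeds only while a matching, unexpired grant exists.
        for g in self.grants:
            if (g.agent == agent and g.resource == resource
                    and "read" in g.modes and g.expires > datetime.now()):
                return self.records[resource]
        raise PermissionError(f"{agent} has no valid grant for {resource}")


# The patient controls the pod; services see only what has been granted.
pod = DataPod(owner="patient-123")
pod.records["care-plan"] = {"notes": "dementia care plan"}
pod.grant("gp-practice", "care-plan", {"read"})
print(pod.read("gp-practice", "care-plan"))  # allowed while granted
pod.revoke("gp-practice", "care-plan")       # access ends here
```

The design choice worth noting is that there is no privileged query path: even an AI training service would have to hold a valid, scoped grant, which is what makes "accidental large-scale breach" a category error rather than a residual risk.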

The Future of AI in the NHS: Balancing Innovation and Trust

For AI to succeed in the NHS, transparency and patient empowerment must be at the core of its implementation.

Patients need to understand how their data is being used and must be given meaningful control over data-sharing decisions.

Clear, user-friendly consent mechanisms are essential to ensuring that digital healthcare does not become a system where individuals are monitored rather than active participants in their own care.

AI decision-making must also be explainable and accountable.

Patients and clinicians alike should be able to understand how AI arrives at medical recommendations and to challenge decisions when necessary.
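One way to picture what "explainable and accountable" means in practice is a recommendation object that carries its own evidence. The sketch below is a hypothetical illustration, not any NHS or vendor API: the factor names, weights, and model identifier are invented, and the point is simply that a recommendation shipped with its contributing factors and model version gives a clinician something concrete to inspect and dispute.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    suggestion: str
    evidence: dict       # input factor -> contribution weight
    model_version: str   # which model produced this, for audit
    challengeable: bool = True

    def top_factors(self, n=3):
        # Surface the factors that most influenced the suggestion,
        # so a clinician can sanity-check or challenge them.
        return sorted(self.evidence.items(),
                      key=lambda kv: abs(kv[1]), reverse=True)[:n]


# Hypothetical example values for illustration only.
rec = Recommendation(
    suggestion="refer to memory clinic",
    evidence={"age": 0.2, "MMSE score": 0.55, "GP visit frequency": 0.1},
    model_version="triage-model-0.3",
)
print(rec.top_factors(2))  # → [('MMSE score', 0.55), ('age', 0.2)]
```

An opaque system returns only the suggestion; an accountable one returns the suggestion plus the grounds on which it can be overruled.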

Without such safeguards, AI risks reinforcing systemic biases, making opaque decisions, and diminishing trust in NHS services.

A Healthcare Future That Works for the People

The AI Opportunities Action Plan, with its 50 recommendations from government AI advisor Matt Clifford, has the potential to modernise healthcare and transform public services as part of what Starmer calls the “Plan for Change.”

But it must be built on a foundation of public confidence.

The term “national data library” is actually quite telling – libraries aren’t centralised repositories where books permanently reside; they’re distributed networks of knowledge where materials are borrowed, used, and returned.

Similarly, the government should support a Solid-based infrastructure where patient data remains in personal “home libraries” – with AI and healthcare services visiting temporarily when needed, like doctors making house calls.

This achieves the same research and care goals while maintaining patient sovereignty.

Just as the NHS revolutionised healthcare access in the 1940s, Solid has the potential to revolutionise healthcare data in the 2020s by giving patients true ownership of their information while enabling secure, permissioned access for AI tools.

With the establishment of “AI growth zones” like Culham in Oxfordshire offering faster planning processes for AI businesses, the NHS cannot afford to repeat the mistakes of past data-sharing initiatives that failed to prioritise patient rights.

AI in healthcare is not just a technological challenge; it is an ethical one.

Trust, transparency, and patient empowerment must not be seen as barriers to AI adoption but as essential building blocks for a system that benefits everyone.

Just as Bevan understood that healthcare must be available to all regardless of wealth, we must ensure that the benefits of AI-driven healthcare are equitably distributed while patients retain sovereignty over their data.

That would be an NHS fit for the digital age that remains true to its founding principles.
