Securing the future of healthcare by building trust in AI

The healthcare industry has long been at the forefront of technological advancements, having pioneered the use of 3D printing and robotics in surgery. The emergence of AI in the sector has spurred another push to innovate and help more people than ever before.

While AI-assisted healthcare technology holds great promise, many patients are wary of the potential downsides associated with its use. A recent UK Government report on public attitudes to data and AI found that, although the public increasingly recognises the potential benefits of data use, they remain sceptical regarding data security and its potential impact on society. It particularly highlighted the increasing levels of pessimism around the latter over the past year or so.

Recent data has found that only 28% of UK adults are confident in AI or technology companies' ability to protect their health information. This is despite the overwhelming popularity of health-focused smart devices like Apple Watches and Fitbits.

A further 44% of UK adults wouldn't trust AI to handle health-related tasks, compared with just 6% of American adults, showing that this mistrust of AI in healthcare is deeply entrenched in the UK.

As the demand for AI-powered healthcare products increases, it’s crucial that distrust from consumers is addressed head-on and that the industry takes advantage of user experience (UX) insights to foster trust in AI in healthcare.

Understanding AI and the root of distrust

Distrust in AI is not new or unique to the healthcare industry. Since its inception, critics have identified risks related to data privacy, accuracy, and transparency when it comes to using AI, and have cautioned against the technology’s lack of accountability.

Research has also found that over 70% of UK respondents had little to no confidence in technology companies’ ability to protect user health information shared with them. Over 50% of respondents cited concerns about companies selling health data, using it for advertising or research without permission, data leaks and identity theft. This shows that not only are people nervous about AI, but they have more general fears about how health data can be abused by corporate interests.

Patient data is quite rightly heavily protected, and data breaches even within the NHS are severely penalised. Any new technology with access to patient data will be held to the highest security standard and requires thorough vetting. Most forms of AI available to the public don’t yet meet this standard and more development is needed before it becomes a reality. If companies want to see wider adoption of AI in the healthcare sector, addressing these concerns about transparency and trust is crucial.

When it comes to healthcare, there are additional concerns. Like all forms of machine-learning technology, AI is only as good as the data it's trained on. Any biases in the original data will lead to the AI making biased decisions, resulting in incorrect or harmful outcomes. With something as complex as health, which can be affected by countless external factors like environment and socio-economic status, the potential for misdiagnosis is considerable. When human doctors with a wealth of training and experience can still struggle to make accurate diagnoses in complex cases or allow their prejudices to influence treatment recommendations, AI-based healthcare systems will also inevitably struggle.
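One practical response to this "garbage in, garbage out" risk is auditing the training data for representation gaps before any model is trained. The sketch below is a minimal, illustrative example: the record fields, groups, and the under-representation threshold are assumptions for demonstration, not a clinical or regulatory standard.

```python
from collections import Counter

# Hypothetical training records: each row carries a demographic attribute
# alongside its label. Field names and values are illustrative only.
records = [
    {"age_band": "18-39", "diagnosis": "positive"},
    {"age_band": "18-39", "diagnosis": "negative"},
    {"age_band": "40-64", "diagnosis": "negative"},
    {"age_band": "65+",   "diagnosis": "positive"},
    # ... many more rows in a real dataset
]

def audit_representation(rows, attribute):
    """Report how often each group appears, flagging groups that fall well
    below an even split across groups (threshold chosen arbitrarily here)."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    expected_share = 1 / len(counts)
    for group, n in sorted(counts.items()):
        share = n / total
        flag = "UNDER-REPRESENTED" if share < 0.5 * expected_share else "ok"
        print(f"{attribute}={group}: {n} records ({share:.1%}) {flag}")

audit_representation(records, "age_band")
```

A check like this doesn't remove bias, but it surfaces obvious gaps early enough for the dataset to be rebalanced or for the model's limitations to be documented.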

Fears around job displacement caused by AI systems are another area of concern, according to The Health Foundation. The UK Government also found this was something the public was apprehensive about in its 'Public attitudes to data and AI: Tracker survey', with job displacement and human deskilling of particular concern.

While this isn’t an immediate concern for many healthcare workers, it is something to bear in mind and will have to be addressed as AI becomes increasingly advanced and able to manage complex data pools. That said, AI could potentially benefit the NHS by shortening waiting times and speeding up diagnoses by analysing vast amounts of data, identifying patterns and automating administrative tasks for doctors.

For all the benefits of AI in streamlining healthcare services, addressing patient needs, and reducing the burden on healthcare providers, there is clear friction between the unfettered use of AI and the requirements of the healthcare industry that needs to be addressed.

What is trust-centric design?

We now have multiple methods for monitoring our health using smartwatches, apps, and at-home testing kits, which has resulted in a heightened awareness of ethical data practices and potential data misuse.

And as we’ve established, trust is crucial for successfully implementing AI in healthcare. People need to have faith in their healthcare providers, systems and products, and anything that shakes this faith can have a devastating impact on a patient’s willingness to engage with healthcare services.

Any products designed for use in healthcare must be thoroughly tested and optimised, so as not to add to the mistrust that already exists when it comes to AI more widely. To do this, businesses should incorporate trust-centric design into their product and system development processes.

Trust-centric design is a philosophy that prioritises building trust between users and technology when designing new processes and products. It can effectively address the reasons behind distrust of AI in healthcare, including patient safety, data privacy, and ethical considerations.

Trust-centric design also helps build user confidence in AI and in their healthcare provider by ensuring accuracy, transparency, security and fairness. This is crucial to the successful implementation of AI in healthcare, because patients are more likely to follow advice they trust, leading to better outcomes. It also means users are more likely to trust new tools to support their health, allowing for continuous improvement in healthcare in future.

In addition, trust-centric design empowers better healthcare decisions by putting patients at the heart of the design process and encouraging understanding, transparency, and human oversight throughout. Developers can use this approach to help foster trust in AI healthcare products and ensure that these technologies are used to improve patient outcomes rather than cause harm.

Principles for designing trustworthy AI healthcare

So how can we start implementing this new approach to AI in healthcare?

Start with the basics: be open about your AI's capabilities, limitations, and decision-making processes; provide clear explanations for AI-generated recommendations or decisions; and avoid black-box models. This will help maintain accessibility and transparency. Fundamentally, users need to understand how AI is making decisions. By providing clear explanations from the outset, you can build trust and head off concerns about potential bias or errors.
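In practice, avoiding a black-box feel can be as simple as never returning a bare prediction. The sketch below is one hypothetical way to pair every recommendation with its confidence and the factors that drove it; the class name, factor labels, and weights are invented for illustration rather than drawn from any real model.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    """Bundle an AI suggestion with the context a user needs to judge it."""
    recommendation: str
    confidence: float                      # model's own estimate, 0.0 to 1.0
    contributing_factors: dict = field(default_factory=dict)

    def summary(self) -> str:
        # Sort factors by the size of their contribution, largest first.
        factors = ", ".join(
            f"{name} ({weight:+.2f})"
            for name, weight in sorted(
                self.contributing_factors.items(), key=lambda kv: -abs(kv[1])
            )
        )
        return (
            f"Suggested action: {self.recommendation} "
            f"(confidence {self.confidence:.0%}). "
            f"Main factors: {factors}. "
            "This suggestion is advisory and should be reviewed by a clinician."
        )

rec = ExplainedRecommendation(
    recommendation="Refer for follow-up blood test",
    confidence=0.82,
    contributing_factors={"HbA1c trend": 0.41, "BMI": 0.22, "family history": 0.18},
)
print(rec.summary())
```

Surfacing confidence and contributing factors in plain language gives users something concrete to question, which is exactly the transparency the principle calls for.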

To address problems around accountability and oversight, ensure human experts are heavily involved in overseeing new AI systems and critical decision-making. It's also important to put error-handling processes in place from the outset to quickly identify and address any errors or biases in AI outputs. It helps to think of AI as a tool to assist healthcare professionals, not replace them. Therefore, it only makes sense to have humans involved from the start.
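One common way to keep humans in the loop is a confidence gate: outputs the model is less sure about are routed to a clinician queue rather than acted on automatically, and every decision is logged for later audit. The threshold, queue, and function names below are assumptions made for this sketch, not part of any specific clinical system.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

CONFIDENCE_THRESHOLD = 0.90      # illustrative cut-off, not a clinical standard
clinician_review_queue = []      # stand-in for a real review worklist

def route_ai_output(patient_id: str, suggestion: str, confidence: float) -> str:
    """Queue low-confidence suggestions for human review; log every outcome."""
    if confidence < CONFIDENCE_THRESHOLD:
        clinician_review_queue.append((patient_id, suggestion, confidence))
        logging.info("Queued for clinician review: %s (confidence %.2f)",
                     patient_id, confidence)
        return "pending_human_review"
    logging.info("Accepted, clinician notified: %s (confidence %.2f)",
                 patient_id, confidence)
    return "accepted"

route_ai_output("patient-001", "No further imaging required", confidence=0.78)
route_ai_output("patient-002", "Routine annual screening", confidence=0.97)
```

The audit trail matters as much as the gate itself: it is what lets errors and biases be traced back and corrected once they are spotted.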

Finally, when creating a human-centric design, some basic principles can apply to any kind of AI technology. The first is creating intuitive and user-friendly interfaces that facilitate interaction with AI, with recommendations tailored to individual users' needs and preferences. Adding a robust feedback mechanism will allow for continuous improvement and learning, maintaining AI's status as a helpful assistant rather than something unfit for purpose.
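A feedback mechanism can start very small: simply capturing whether a user found a suggestion helpful, in a structured form that can later feed review and retraining. The storage format, file name, and fields in this sketch are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "ai_feedback_log.jsonl"   # hypothetical local store for the sketch

def record_feedback(suggestion_id: str, helpful: bool, comment: str = "") -> None:
    """Append one structured feedback entry as a JSON Lines record."""
    entry = {
        "suggestion_id": suggestion_id,
        "helpful": helpful,
        "comment": comment,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

record_feedback(
    "rec-2024-0131",
    helpful=False,
    comment="Advice did not account for existing medication.",
)
```

Even this minimal loop closes the gap between users and developers: flagged suggestions become concrete cases to review, rather than silent failures that erode trust.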

As AI continues to reshape the landscape of healthcare, addressing the trust deficit is the first step towards realising its full potential. Happily, this need for trust is already being recognised by lawmakers and industry stakeholders. By prioritising user experience and adopting a trust-centric approach to design, we can bridge the gap between scepticism and acceptance, empowering healthcare providers to embrace AI as a valuable ally using the principles listed above.
