Access to essential health services remains out of reach for over half of the world’s population, with nearly 1.3 billion people spiralling deeper into poverty due to medical costs.
By Dr Ricardo Baptista Leite, CEO at HealthAI
This widening chasm of disparity turns health into a privilege of the few. The intervention of Artificial Intelligence (AI) could do wonders to bridge the gap, offering life-changing support to populations that have long been underserved.
Yet AI’s adoption in health systems has been much slower, particularly when compared to other sectors. Building public trust is crucial to driving adoption, and that starts with recognising the technology’s inherent issues.
Recognising inherent issues
Dedicated and systematic attention is needed to ensure that past inequities faced by underprivileged populations are not carried forward, amplifying disparities in medical care and creating a cycle that perpetuates across generations.
Algorithmic bias creeping into the development and implementation of modern healthcare technologies could reinforce existing inequities, further restricting access to essential treatment for those who need it the most.
For instance, a key issue with AI in healthcare lies in its reliance on healthcare costs as a proxy for health needs. Research has shown that algorithms used to assess patient risk often assign lower risk scores to black individuals compared to equally sick white patients.
Because black patients have historically received less medical care due to systemic inequalities, they generate lower healthcare costs, leading the AI to assume that they require fewer resources and further compounding the issue. This bias makes them less likely to be referred for critical, personalised care programmes, resulting in reduced access to care and, ultimately, lower chances of better health outcomes.
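To make the mechanism concrete, here is a minimal, self-contained Python sketch with entirely invented numbers of how training a risk model on cost rather than need bakes in this bias. The 0.6 access factor, the feature set, and the 10% referral cutoff are all hypothetical assumptions chosen for illustration; this is not the algorithm examined in the research.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# True health need is identically distributed across both groups.
group = rng.integers(0, 2, n)                 # 0 = majority, 1 = underserved
need = rng.gamma(2.0, 1.0, n)

# Assumption: the underserved group has historically received less care,
# so the same level of need generates lower observed spending.
cost = need * np.where(group == 1, 0.6, 1.0) + rng.normal(0.0, 0.1, n)

# A least-squares "risk model" trained to predict cost from patient
# features (here: need, a group-correlated feature, and an intercept).
X = np.column_stack([need, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
risk = X @ coef

# Refer the top 10% of patients by predicted "risk" (really: predicted cost).
referred = risk >= np.quantile(risk, 0.90)
sickest = need >= np.quantile(need, 0.90)

for g, label in [(0, "majority group"), (1, "underserved group")]:
    in_group = (group == g) & sickest
    print(f"{label}: {referred[in_group].mean():.0%} of the sickest decile referred")
```

Because the model is rewarded for predicting spending rather than sickness, equally sick members of the underserved group score lower and clear the referral threshold far less often.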
A 2023 study by Stanford Medicine revealed concerning flaws in generative AI models like ChatGPT: they often provided inaccurate answers to medical questions and reinforced harmful, outdated beliefs about biological differences between people on the basis of their skin colour.
As AI tools become widely accessible, as in the case of generative AI, it’s important to make sure they don’t unintentionally amplify the disadvantages people already face. Those without adequate healthcare coverage, often from marginalised communities, are more likely to turn to generative AI for assistance in finding healthcare providers or mental health support.
This can be an extraordinary opportunity for people who otherwise have little or no access to care. However, if these systems are trained on biased data that fail to account for structural inequalities, they may provide misleading or inadequate recommendations, further undermining the very people who rely on them the most. A cruel paradox.
Gender bias presents another serious concern. Some of the most “accurate” disease-screening algorithms have been found to perform significantly worse for women than for men. A recent study revealed that AI models designed to detect liver disease from blood tests were twice as likely to miss the condition in women as in men. By delaying crucial interventions for women, gender bias undermines the pursuit of more inclusive healthcare.
These issues point to a broader flaw in how AI-driven health decisions can work counterproductively, underscoring the urgent need to refine AI systems. Post-market surveillance can play a key role in mitigating risks, minimising harm, and countering the negative consequences of bias, ensuring that AI serves everyone fairly, regardless of demographics.
Media literacy is the difference between fact and fiction
It’s not surprising that illiteracy breeds exploitation. The prevalent misuse of AI, particularly against vulnerable populations such as the elderly or those with limited access to credible information, can negate the progress made by public healthcare efforts.
In places like Nigeria, where media literacy varies widely across demographics, deepfakes and other AI-driven technologies are increasingly being misused to spread misinformation about health practices and medical conditions. Those who are already vulnerable may lack the critical skills to distinguish reliable information from fabricated content, making them particularly susceptible to the dangers of misinformation.
Deepfakes can be extremely persuasive and journalistic in appearance, often featuring well-known figures and imitating credible health institutions or media outlets to promote dubious health products, such as treatments for hypertension.
The spread of misinformation using AI not only endangers individuals by pushing them towards harmful health decisions, but also stalls overall public health initiatives. When communities are misled or manipulated by false claims, it becomes harder for them to trust even legitimate medical advice, healthcare providers, and institutions.
The idea of entrusting something like AI with healthcare and medical decisions could perpetuate fears and suspicions about the technology itself. What if the first encounter you had with a powerful technology – one capable of improving your life – was tainted with manipulation and deception?
Crumbled trust is not easy to restore. Widespread scepticism and resistance to the adoption of AI-driven solutions are the last things we need standing in the way of equitable healthcare for all. If people don’t trust AI, they will not use it, especially when it comes to their own health or the wellbeing of their loved ones.
Media literacy is a prerequisite for extending the benefits of AI to everyone, no matter where they are in the world. While it’s unrealistic to imagine deepfakes and misleading content being banished from existence altogether, a focus on equipping individuals to critically assess the media they consume is a more practical approach.
Providing targeted education on how to maintain a critical mindset and spot digital manipulation can help people safely navigate the confusing world of information, enabling them to distinguish fact from fiction. This empowers them to take control of their own media consumption, prompting them to think before rushing to conclusions. To reap the benefits of modern technology, its beneficiaries must have an informed and healthy relationship with it.
How do we get there?
A multidimensional and humane approach is crucial to rectifying algorithmic bias in health systems. Participatory governance, as a foundational principle, plays a pivotal role in ensuring that a diverse range of stakeholders, including those from historically marginalised communities, are actively involved in the development and ongoing evaluation of AI systems.
This allows the unique needs and experiences of underrepresented groups to be identified and incorporated. Furthermore, establishing feedback mechanisms through which users, particularly patients and those from vulnerable demographics, can report harm or bias in AI-driven systems is key.
Implementing strong legal and ethical safeguards is vital. Transparent regulatory frameworks must ensure that AI systems are held accountable, with clear standards to address bias and ensure fairness.
Just as rigorous regulations have long safeguarded the quality and safety of pharmaceutical products worldwide, similar safeguards can deter unethical practices and ensure that AI in healthcare is developed and deployed responsibly. These measures would not only protect vulnerable populations but also inspire trust in AI’s potential to enhance healthcare.
In parallel, conducting regular fairness audits and integrating bias detection into the AI development process ensures that any bias present in a system is addressed at the outset, before it can affect patient care.
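As one hedged illustration of what such an audit can look for, the sketch below compares miss rates (false negatives) across demographic groups, the very disparity reported in the liver-disease example above. The function names and the 1.25 disparity tolerance are illustrative choices, not a standard; real audits draw on a much wider toolkit, including calibration checks and equalised-odds tests.

```python
import numpy as np

def false_negative_rate(y_true, y_pred, mask):
    """Share of genuinely positive cases the model missed within one subgroup."""
    positives = mask & (y_true == 1)
    if positives.sum() == 0:
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))

def fairness_audit(y_true, y_pred, groups, max_ratio=1.25):
    """Flag any subgroup whose miss rate exceeds the best-served group's
    rate by more than max_ratio (an illustrative tolerance)."""
    rates = {g: false_negative_rate(y_true, y_pred, groups == g)
             for g in np.unique(groups)}
    best = min(rates.values())
    return {g: {"fnr": r, "flagged": r > best * max_ratio}
            for g, r in rates.items()}

# Hypothetical screening results: the model misses far more cases in women.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
groups = np.array(["m", "m", "m", "f", "f", "m", "m", "f", "f", "m"])
print(fairness_audit(y_true, y_pred, groups))
```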
Needless to say, training AI systems on diverse and representative datasets is non-negotiable, not only to improve accuracy but also to ensure that medical decisions are based on a broader, more inclusive set of information.
More often than not, context makes a world of difference, especially when lives are at stake. For example, with datasets that account for historical disparities, algorithms can be better equipped to make more informed and equitable predictions. Human beings have stories and, in this case, acknowledging them can positively transform lives.
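On the dataset side, even a basic representativeness check run before training can catch glaring gaps. The sketch below is a minimal illustration under assumed inputs; the function name, the group labels, and the 5% tolerance are all hypothetical.

```python
from collections import Counter

def representation_gaps(records, reference_shares, tol=0.05):
    """Compare a dataset's demographic mix to reference population shares.

    records: iterable of group labels, one per patient record
    reference_shares: dict mapping group label -> expected share (sums to 1)
    Returns groups whose observed share deviates from the reference by more than tol.
    """
    counts = Counter(records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tol:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical example: women are half the reference population
# but only a third of the training records.
print(representation_gaps(
    ["m"] * 670 + ["f"] * 330,
    {"m": 0.5, "f": 0.5},
))  # -> {'m': (0.67, 0.5), 'f': (0.33, 0.5)}
```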
HealthAI, The Global Agency for Responsible AI in Health, is a Geneva-based, independent nonprofit aiming to drive equitable access to AI-powered health innovations. It collaborates with governments, international organisations and global health leaders.
Dr. Ricardo Baptista Leite is a medical doctor trained in infectious diseases with over 15 years of experience in global health, health systems, and science-based policymaking. Prior to his current role, he served four terms as a Member of Parliament in Portugal on both health and foreign affairs committees.