How the EU AI Act will transform health tech leadership

Odgers Berndtson’s Mike Drew and Chris Hamilton discuss the impact of the EU AI Act on health tech leadership

Healthcare is poised to undergo a profound transformation driven by AI, arguably more so than any other industry.

AI is already used in an array of applications, from diagnostic tools to patient monitoring and personalised medicine, and the AI revolution will only accelerate a trend that is well underway.

Over the next six years, the AI healthcare market is expected to grow by nearly 40 per cent, heralding an era of widespread adoption of the technology in the global healthcare industry.

The EU AI Act, being phased in over the next 24 months, will therefore be particularly consequential for health tech leaders.

Most AI systems will need to comply with the Act by the first half of 2026, with provisions covering general-purpose AI taking effect earlier, in 2025.

While the UK and US are developing their own local regulations, the EU’s law is set to become the global standard by which corporate AI is regulated.

It is also the law health tech leaders should pay serious attention to, as it will necessitate adjustments in how their companies use, develop, and manage AI technologies.

Below, we examine how the EU’s AI Act will change health tech leaders’ approach to compliance, risk, and ethics, and its impact on health tech leadership more broadly.

Accountability and AI’s impact on patients

The EU AI Act mandates strict compliance protocols that will reshape health tech operations, emphasising enhanced accountability and safety in AI applications.

For health tech companies, this will mean mandatory impact assessments for AI systems such as diagnostic tools and treatment algorithms, registration with the relevant authorities, and adherence to specific technical standards.

Health tech leadership teams and boards will need to implement robust governance structures to ensure transparent data handling and maintain up-to-date technical documentation.

The purpose is to avoid biases in data and safeguard patient information.

The Act also requires companies to log AI decision-making for auditing purposes, with failure to comply resulting in severe penalties.

Bias, discrimination, and privacy violations

Under the Act, health tech faces stringent risk management demands aimed at ensuring patient safety.

Importantly, the Act categorises health-related AI systems as “high-risk” due to their profound implications for health and fundamental rights.

Health tech leadership teams will need to incorporate AI risk management into their governance frameworks to assess and mitigate risks, including bias, discrimination, and privacy violations.

This means conducting thorough evaluations of AI algorithms, data sets, and decision-making processes to identify potential biases or ethical concerns before these systems are deployed.

Moreover, it will require health tech leaders to implement ongoing monitoring to ensure AI systems continue to operate within ethical boundaries as they learn and evolve.

High ethical standards

Corporate governance frameworks must now incorporate AI ethical considerations.

Specifically, this will require adherence to high standards of fairness, accountability, transparency, and human oversight to ensure responsible AI deployment.

Health tech leaders will need to ensure that AI used in tools such as diagnostics, treatment planning, and patient monitoring complies with rigorous data handling and risk assessment protocols.

These protocols aim to mitigate bias, thereby safeguarding patient rights and ensuring equitable treatment outcomes.

In addition to logging decisions and assessing the AI decision-making process, health tech leaders should establish who is accountable for the outcomes of AI decisions, including any errors or harm caused.

A new frontier for health tech leadership

Over the next 24 months, health tech leadership teams will need to expand their knowledge of AI corporate governance.

Given the complexities of the Act, we anticipate growing demand for senior compliance officers and specialists well-versed in AI governance and risk, with particular knowledge of its use in the healthcare industry.

Health tech companies will need leaders who not only understand the technical aspects of AI but are also proficient in regulatory frameworks, to ensure these technologies are developed and deployed responsibly.

Moreover, health tech board members will need to enhance their expertise or bring in knowledge in areas like AI accountability, legal compliance, and ethical AI usage to navigate the new regulatory environment effectively.

This shift will likely lead to the creation of new roles and the growth of existing positions to include AI-specific compliance and ethical oversight.

Strategic objectives within these positions may also need to be realigned with the Act’s rigorous demands.

While AI will remove the need for some jobs, it is clear that adopting the technology will require companies to expand certain leadership roles and create new ones to manage AI compliance.

Particularly in health tech, we expect C-suite roles to grow in scope, requiring the addition of new team members and even entire teams to manage AI corporate governance.

The next 24 months present an exciting and challenging new frontier for health tech leadership.
