
By Christian Espinosa, CEO & founder, Blue Goat Cyber
AI is already reshaping patient care – from faster diagnosis and risk scoring to robotic-assisted surgery and continuous monitoring.
But AI in MedTech is moving at startup speed, while security and safety practices are still moving at regulatory speed.
That gap isn’t theoretical. It’s where real patients get hurt, and real submissions get delayed.
In 2026, the medical device manufacturers that treat AI safety and cybersecurity as clinical requirements, not checkboxes, will be the ones that pull ahead.
Not all AI in medical devices is created equal. We lump everything under “AI” and forget that the risk profile differs wildly depending on what a device does and how it is used.
Low-risk applications, such as image enhancement, workflow triage, and administrative scheduling, can often be deployed safely with the right controls and human oversight.
High-risk applications are different: AI driving therapy decisions, guiding life-sustaining devices, or informing surgical actions in near real time.
Rolling those out at scale without strong security and human supervision is premature and dangerous.
AI failures in these systems aren’t just “bugs.” They turn into misdiagnosis, inappropriate treatment, delayed care, and clinician burnout when nobody trusts the recommendations.
If we expect doctors and nurses to rely on AI-enabled devices, we owe them transparency on how those systems were trained, monitored, and secured.
One of the biggest emerging risks for 2026 is model and data poisoning.
Most people in the MedTech industry have heard the term, but they still think of it as an academic problem. It isn’t.
If an attacker or even a flawed process introduces bad data into a model’s training or update pipeline, the model can quietly shift its behaviour.
A medical imaging model might start missing subtle tumours. A risk-scoring system might under-prioritise certain patients.
There’s no flashing red light when this happens; performance just quietly degrades in real patients.
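To make the mechanics concrete, here’s a minimal sketch in Python, on entirely synthetic data, of how relabelling a modest share of positive training examples can erode a classifier’s sensitivity without producing a single error message:

```python
# Illustrative sketch only: synthetic data, not a real medical model.
# Flipping a fraction of positive training labels (label poisoning) makes the
# model miss positives, with no crash or error to draw anyone's attention.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A flawed pipeline or an attacker relabels 25% of positives as negatives.
rng = np.random.default_rng(0)
pos = np.flatnonzero(y_tr == 1)
flipped = y_tr.copy()
flipped[rng.choice(pos, size=len(pos) // 4, replace=False)] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, flipped)

# Sensitivity on held-out positives: the poisoned model quietly misses more.
print("clean recall:   ", round(recall_score(y_te, clean.predict(X_te)), 3))
print("poisoned recall:", round(recall_score(y_te, poisoned.predict(X_te)), 3))
```

The poisoned model trains, validates, and deploys exactly like the clean one; the only evidence is a quieter metric, and in the clinic that metric is a missed finding.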
Right next to model and data poisoning is algorithmic bias.
We’ve seen the headlines about biased hiring and credit models, but the same mechanics apply to MedTech.
A model trained mostly on one demographic will underperform on other populations. That’s not just a fairness issue – it’s a safety issue.
Regulators will increasingly expect proof that training and validation data accurately reflect the intended patient population, and that manufacturers are monitoring performance drift and bias over time rather than treating validation as a one-and-done exercise.
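One way to operationalise that expectation is to report performance per subgroup rather than a single aggregate score. A minimal sketch, with made-up data and a hypothetical “group” column standing in for a demographic attribute:

```python
# Illustrative sketch: per-subgroup sensitivity instead of one aggregate score.
# The data and the "group" column are invented for the example.
import pandas as pd
from sklearn.metrics import recall_score

df = pd.DataFrame({
    "y_true": [1, 1, 0, 1, 1, 0, 1, 0],
    "y_pred": [1, 1, 0, 1, 0, 0, 0, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# The single overall number (0.6 here) hides that the model
# catches nothing at all in group B.
print("overall recall:", recall_score(df["y_true"], df["y_pred"]))
for name, g in df.groupby("group"):
    print(name, recall_score(g["y_true"], g["y_pred"]))
```

Tracking these per-subgroup numbers release over release is one straightforward way to catch both bias and drift before patients do.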
The second major shift I foresee for 2026 is security-by-design moving from ‘nice-to-have’ language in slide decks to a non-negotiable expectation.
For years, cybersecurity in devices has been treated as something you “add” late in development.
A product is nearly ready, then someone is told to “handle the cybersecurity.”
That approach is already responsible for costly submission delays and last-minute redesigns. With AI in the mix, the cost of that mindset goes up dramatically.
Security-by-design for AI means bringing threat modelling and risk assessment into the requirements phase, not bolting it on after the architecture is locked.
It means treating training data, model pipelines, update mechanisms, and hospital integration points as part of the attack surface. And it means continuous monitoring of safety and security, not just pre-market testing.
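As one concrete example of treating the update mechanism as attack surface, a device can refuse to load any model artefact whose signature doesn’t verify. A minimal sketch using the Python cryptography package; the function and path names are illustrative, and key distribution, rollback protection, and secure key storage are deliberately out of scope:

```python
# Illustrative sketch: verify a model artefact's signature before loading it,
# so a tampered or poisoned update is rejected rather than silently deployed.
# Uses the third-party "cryptography" package (Ed25519 signatures).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def load_model_bytes(artefact_path: str, sig_path: str,
                     public_key: Ed25519PublicKey) -> bytes:
    """Return the model bytes only if the detached signature verifies."""
    with open(artefact_path, "rb") as f:
        blob = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, blob)  # raises if the artefact was altered
    except InvalidSignature:
        raise RuntimeError("Model update failed signature check; refusing to load")
    return blob
```

The design choice matters as much as the code: verification happens before the model touches inference, so a failed check is a blocked update, not a post-incident finding.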
Frameworks like Good Machine Learning Practice and emerging guidance from regulators such as the FDA, MHRA, and Health Canada are all pointing in the same direction: AI safety and cybersecurity must be engineered in from the start.
The third prediction: AI will increasingly be used to defend AI in MedTech.
Static controls and annual penetration tests are not enough when models and data flows are constantly changing.
We’ll see broader adoption of AI-driven monitoring systems that learn what “normal” looks like for a device – traffic patterns, usage, outputs – and then flag anomalies that may indicate poisoning, tampering, or misuse.
These systems won’t replace human judgment, but they will give security and clinical teams early warning before subtle changes turn into harm.
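As an illustration of the idea, not anyone’s production design, here’s a minimal anomaly-detection sketch using scikit-learn’s IsolationForest on synthetic telemetry; the features and threshold are assumptions for the example:

```python
# Illustrative sketch: learn a baseline of "normal" device telemetry, then
# flag departures from it. Synthetic data; a real deployment would use
# device-specific features and a validated alerting threshold.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry, e.g. requests/min, payload size (MB), mean output score.
normal = rng.normal(loc=[100.0, 2.0, 0.5], scale=[10.0, 0.2, 0.05],
                    size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one typical, one anomalous (traffic spike, odd outputs).
new = np.array([[102.0, 2.1, 0.52],
                [400.0, 9.0, 0.99]])
print(detector.predict(new))  # 1 = looks normal, -1 = flag for human review
```

The flag goes to a person, not an automated shutdown; the value is in the early warning, not the verdict.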
All of this is happening against the backdrop of one of the harshest environments a device can be placed into: the modern hospital network.
Ransomware crews don’t wake up targeting “Device X”; their scans don’t care what they hit. Flat or poorly segmented networks, legacy systems, and a patchwork of vendors create a huge attack surface.
As AI-enabled devices become more interconnected, a compromise in one corner of the network can have cascading effects.
In 2026, hospital Chief Information Security Officers (CISOs) and procurement teams will push harder for evidence that AI-driven devices can survive in this reality, not just in a lab.
The fourth and most important theme for 2026 is collaboration.
No manufacturer, regulator, or hospital can solve AI safety and cybersecurity alone.
The most credible AI-enabled devices will be those built with early regulatory engagement, honest dialogue with hospital security and clinical teams, and transparency about what the AI can and cannot do.
That includes being clear about training data, limitations, monitoring plans, and how the system will be updated safely over time.
The future of AI in healthcare is genuinely exciting. But we are not yet at a point where high-risk AI belongs everywhere in mainstream care.
If we rush ahead without addressing cybersecurity and safety, we won’t just create technical debt – we’ll create clinical harm and lose trust that will be incredibly hard to win back.
In 2026, the MedTech companies that stand out will be the ones that stop treating cybersecurity as a separate concern and make it the foundation for AI-enabled devices.
If your AI roadmap doesn’t have a cybersecurity roadmap next to it, you’re not ready.
