Health Technologies

New ETSI standard outlines baseline cyber security requirements for AI models and systems – HTN Health Tech News

The European Telecommunications Standards Institute has announced the launch of a new standard, ETSI EN 304 223, outlining minimum cyber security requirements for AI models and systems as the “first globally applicable European Standard (EN) for AI cyber security”.

The new standard is designed specifically to protect AI systems from sophisticated cyber attacks, addressing emerging forms of risk such as data poisoning, model obfuscation, and indirect prompt injection.

It outlines 13 principles and requirements across five phases: secure design, secure development, secure deployment, secure maintenance, and secure end of life.

“The EN will be instrumental for stakeholders throughout the AI supply chain, from vendors to integrators and operators, and will provide them with a clear and logical baseline for AI security,” ETSI shares. “Its scope covers AI systems incorporating deep neural networks, including generative AI, and is developed for systems intended for real-world deployments.”

A technical report is to be published shortly to offer a domain-specific application of the principles to generative AI, including deepfakes, misinformation, and copyright.

Scott Cadzow, chair of ETSI’s Technical Committee for Securing Artificial Intelligence, said: “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be overstated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”

Wider trend: AI standards and guidance

Health Level Seven (HL7) has launched an AI Office, with the aim of setting foundational standards for safe and trustworthy AI in driving international transformation in healthcare. The AI Office is said to focus on four strategic workstreams, each designed to ensure emerging technologies are “trusted, explainable and interoperable”, as well as scalable across clinical, operational, and research settings globally. The first workstream looks specifically at standards, with the AI Office aiming to build an AI-ready interoperability stack and develop frameworks around safe and explainable AI tools.

The CQC has issued guidance on the use of AI in GP services, sharing what it looks at when assessing safety and compliance across areas including procurement, governance, human oversight, learning from errors, data protection, and staff training. Assessors will check that AI tools have been procured in line with relevant evidence and regulatory standards such as DCB0160 and DTAC, and will review clinical governance arrangements to confirm appropriate and safe use.

HTN was joined by Neill Crump, digital strategy director at The Dudley Group NHS Foundation Trust, and Lee Rickles, CIO at Humber Teaching NHS Foundation Trust, to discuss practical steps health and care organisations can take to prepare for AI. Neill and Lee shared details of their current work and their journey to date, best practices, learnings, challenges, and the opportunities that lie ahead.
