
Managing EHR complexity, addressing technical challenges, and achieving a balance between customisation, compliance, and scalability

For a recent HTN Now panel discussion on the topic of managing EHR complexity, we were joined by Paul Charnley, former CIO and chair of the NHS Blueprinting Programme, and Mike Hardman, principal engineer and EPR technology lead at Aire Logic, who shared some of their insight and experience on overcoming challenges around EHR design and implementation.

By way of introduction, Mike shared his background, including his 25 years’ worth of experience in the wider technology space, and his work over the last four years in digital healthcare. “In that time, I’ve worked on a lot of EHR projects,” he told us, “and I’m usually the guy that gets called in to firefight when there are EHR issues.”

Paul introduced himself as “a recovering NHS CIO”, adding “now they’ve put me out to pasture I’ve been seconded by NHS England to help with digital blueprinting”. On his experience around EHR implementation, he talked about a current programme of work focusing on “setting trusts and ICBs up for success” in this regard, including “the balance between standardising the configuration of systems, customising them, personalising them, and figuring out how they fit into the broader ecosystem of solutions”.

Discussing the challenges around EHR complexity

Mike took us through some of the challenges around EHR complexity, saying, “the EHR is an extremely important and high-pressure piece of technology – it’s touched on by nearly every function and used at every stage of treatment, as well as offering that access to records which is foundational to providing good care. It’s vital that it’s delivered in a timely and accurate manner.”

Whereas in other industries “it’s not as big a deal if information is slightly out of date”, in health it’s absolutely essential that alerts about things like medications and results are as close to real-time as possible, Mike said. “That’s extremely difficult in a care setting, where we don’t have control over the number of people receiving treatment, or how many clinicians will be requesting data. We’re also operating in a resource-constrained environment, so we’re not a revenue-generating industry that can throw more resource at a problem; we have to do it within a very restricted cost model.”

Sharing that he’s often asked why EHR implementation “has to be so complicated”, Paul told us that “it’s because it’s about people, and people are complicated – both as users and subjects of the system”. Really, he added, “it’s where sciences like biology and chemistry meet the technology, and meet the bureaucracy; the bureaucracy is complicated in itself, because we have to deliver APIs and that kind of thing.”

Paul also outlined other challenges relating to the high volume of data and managing complex data flows, as well as the regulations which accompany this, saying, “we don’t exist in isolation – we sit in a complex matrix of organisations, and when you multiply up the various combinations of these things, the challenges get exponentially more difficult to manage as a system, with relatively low bandwidth to make things happen between those systems. It has become easier in some respects and a lot more difficult in others.”

Mike agreed with this sentiment, saying that whilst healthcare has always been complex, “trying to build the systems that account for the level of changes and growth in healthcare will always be a challenge, and it doesn’t matter how much technology we bring in, it’s not going to simplify the fact we’re always playing a game of catch-up in the healthcare space.”

From a tech provider perspective, Mike shared that it’s also challenging to keep pace with advancements in medicine and treatment, and that this can often mean needing to change “potentially everything” about the way certain aspects of care are dealt with. “If we don’t treat our systems as an ongoing project, that’s where the problems really start magnifying, and we need to stay on top of that maintenance aspect, changing as the field moves,” he said.

Addressing technical challenges

Mike noted that his approach to addressing technical challenges is often “taking a step back, trying to break things down, and looking at the systematic view”. Standards like FHIR and HL7 “can solve some problems”, he went on, “but they don’t solve everything – one of the things I’ve noticed working with custom and off-the-shelf EHR products is that in a trust you might end up with multiple different ways things like blood pressures are taken and stored.”

Observing that “there are always ten different departments doing the same thing ten different ways”, Mike talked about how offering a standard approach to “collecting some simple piece of information like a blood pressure reading” can help “magnify the capability of a team to deliver”. The same thing also happens between trusts, he said, “and once that’s standardised, you can move on to what makes the treatment in your trust special, rather than reinventing the wheel – I’m not saying every trust should be the same, but at least having a standardised way of collecting and storing some of this can really help amplify the capabilities of your technical teams.”
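To make that concrete, here is a minimal sketch (illustrative only, not drawn from the panel) of how a single blood pressure reading might be represented as an HL7 FHIR Observation resource, using the standard LOINC codes for the blood pressure panel and its systolic and diastolic components; the patient identifier and values are placeholders.

```python
# Minimal sketch: a blood pressure reading shaped as a FHIR R4 Observation.
# LOINC 85354-9 is the blood pressure panel; 8480-6 and 8462-4 are its
# systolic and diastolic components. Patient id and values are placeholders.
import json


def blood_pressure_observation(patient_id: str, systolic: int, diastolic: int) -> dict:
    """Build a FHIR-shaped dict for a single blood pressure reading."""
    def loinc(code: str, display: str) -> dict:
        return {"coding": [{"system": "http://loinc.org", "code": code, "display": display}]}

    def mmhg(value: int) -> dict:
        return {"value": value, "unit": "mmHg",
                "system": "http://unitsofmeasure.org", "code": "mm[Hg]"}

    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": loinc("85354-9", "Blood pressure panel with all children optional"),
        "subject": {"reference": f"Patient/{patient_id}"},
        "component": [
            {"code": loinc("8480-6", "Systolic blood pressure"), "valueQuantity": mmhg(systolic)},
            {"code": loinc("8462-4", "Diastolic blood pressure"), "valueQuantity": mmhg(diastolic)},
        ],
    }


if __name__ == "__main__":
    print(json.dumps(blood_pressure_observation("example-123", 120, 80), indent=2))
```

Storing every reading in one agreed shape like this is what lets ten departments – or ten trusts – exchange and query the data without ten bespoke mappings.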

Drawing on a question from our live audience around whether convergence is a “step too far”, Paul said “I think what we’re saying is that if we have fewer larger pieces to put together, it does simplify the end problem, but the complexity adding to the people thing is the politics, as well as everything else you have to consider when making agreements.” Whilst some places are “successfully working together”, he continued, “there is a concern that as a result they’re only doing the things that affect most people, or most trusts, rather than some of the specialist things they might need to look at”.

There’s a balance between standardising and compromising, Paul went on. “I don’t think we’re trying to ruthlessly standardise, but we are trying to lead people to systems that also fit clinical standards. It’s hard to argue why you would want to be very different – what’s different is whether you do invasive heart operations in the same hospital or have to move them on, so we’ll have to build some variation in, but I don’t think we’ll ever get to a level of ruthless standardisation.”

“I think you’re absolutely right,” Mike said, “and there’s no real call for trusts to be completely standardised, because each has their own specialisation and capabilities, so actually complete standardisation would limit their operations. I think there’s something to be said, however, for introducing standard models of data, and at the very least the ways we collect and store particular pieces of information – by picking the most unique clinical areas and making sure we can handle the complex cases, and then sharing that out as a model across the trust, I think we can move toward at least a sort of records standardisation that would benefit everybody.”

Data longevity is another area in which the health industry differs from other industries, Mike highlighted, “because we look at a much longer duration of data storage than in nearly every other industry, and even though we don’t know what technology is going to look like in 100 years, we’re attempting to ensure our data is going to remain accessible for that length of time, and doing everything that we can right now to allow a compatible data structure out into an unseen future. I think standardisation of these sort of archetypes is a significant thing that we should be looking at nationally.”
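By way of illustration of that idea (a general pattern rather than any specific national model), one approach is to store each record alongside the identifier and version of the data model or archetype it conforms to, so that a system decades from now can still work out how to interpret the payload; the archetype identifier in the sketch below is hypothetical.

```python
# Illustrative sketch (hypothetical archetype id, not a real national standard):
# each archived record carries the id and version of the model it conforms to,
# plus a plain, durable serialisation, so future systems can interpret it.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ArchivedRecord:
    archetype_id: str        # which data model the payload follows (hypothetical id below)
    archetype_version: str   # which version of that model
    recorded_at: str         # ISO 8601 timestamp, unambiguous far into the future
    payload: dict = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialise to plain JSON – a deliberately simple, durable format."""
        return json.dumps(asdict(self), indent=2)


record = ArchivedRecord(
    archetype_id="example.blood_pressure",   # hypothetical identifier
    archetype_version="1.0.2",
    recorded_at=datetime.now(timezone.utc).isoformat(),
    payload={"systolic_mmHg": 120, "diastolic_mmHg": 80},
)
print(record.to_json())
```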

Tackling another question from our live audience about knowing how to approach a trust’s technical landscape when systems and solutions have been added over time, Paul talked about observing a trend toward standardising “other systems that trusts own across their clinical network”, giving the example of Cheshire and Merseyside, where “all of the endoscopists use the same endoscopy solution”. That not only means fewer systems to interoperate with, he said, “but it allows endoscopists to load balance work between departments because they know how to use each other’s solutions – I think that’s going to be something we do more of”.

Although convergence has many benefits, Mike pointed to some of the risks surrounding the use of the same systems or solutions across a wider network, saying, “if we are converging our solutions, we’re potentially producing a single point of reliance and a single point of failure; there is something to be said for having these disparate systems in that we’re limiting our scope for failure relating to attacks or availability to just a single trust or hospital.”

Striking a balance between customisation, compliance, and scalability

When it comes to striking that balance between customisability, compliance, and scalability, Mike told us how he’s seen some successes with commercial off-the-shelf “almost prescribed” solutions, and equally some successes with “completely home grown, in-house built” solutions, but that all of them “struggle to balance exactly those points”. Whilst trusts might get an “easier time” of things like compliance when there’s already a built-in compliance solution they can use, he continued, “they can struggle to really make that fit the uniqueness of their trust”.

On the other hand, when trusts adopt custom solutions, “they often struggle ensuring they have the appropriate compliance”, Mike went on, “so it’s almost as though you have this triangle of things you need to have, and you can only choose two of them to be simple, and the other one is always going to be a pain.” It might be more useful for more specialised trusts to have a completely custom built solution, he considered, “since the customisation is something you’ll be doing a lot of”; whereas for trusts with “more standard” operations, “you may find a lot less of a customisation need, therefore you can get the benefits of pulling an off-the-shelf option, but ultimately you are going to have to pay the price in one way or another”.

“There’s also the other side of it,” Paul considered, “where trusts have taken an off-the-shelf product and successfully customised it to a great extent for their own purposes – but that makes it much more difficult to maintain in the long term, particularly as the underlying EPR functionality develops. That was my own experience, and I think we can trace it back to how much it needed Anglicising before it became something that could be shared across UK sites more comfortably.” There’s also the risk that some customisations will hold trusts back from adopting things like the cloud, he added.

Both Paul and Mike talked about the “tipping point” which comes when trusts heavily customise their solutions and it becomes more difficult to maintain, with Mike saying, “there’s definitely a point where you maybe should have thought about just building this as a completely custom solution; exactly where that lives is difficult to decide, and it’s about having someone who has the foresight and the confidence to make that call.”

Best practices around design and performance

Mike talked about how during the design and planning process, he’s seen trusts “fail to give themselves enough headroom for the duration of their solution”, and needing to keep scaling up their hardware, storage, or processing capabilities “because they’ve massively outgrown where they started”. The trusts he has worked with have “rarely built in a scale-out capacity”, he told us, “which would mean building in a consideration for how a solution might expand from the very beginning”. A lot of trusts also choose to run at “80 or 90 percent capacity”, he went on, “which gives them no wriggle room for the bursts that might happen during a normal treatment day”.

Building in that idea of capacity and appropriate headroom from the start is something that is absolutely necessary, according to Mike, “and that’s something I see missed very often from the start of the process – just because everything is fine today doesn’t mean you’ll be fine tomorrow once that extra one percent of data goes in there”. Having somebody keeping an eye on capacity and headroom, and “having that as a headline piece for them”, is a good way of ensuring this isn’t overlooked, he recommended. “Something we often forget with technology is that we need to be accounting for change – that’s the point of agile, right? We’re not just building these systems and then running away from them; they’ve got to keep growing with us, and that has to be built in from the start.”
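As a rough illustration of that kind of headroom check (the figures and the 80 percent threshold below are purely illustrative), the short sketch estimates how many days remain before a system breaches a chosen utilisation threshold, given its current usage and an observed growth rate.

```python
# Illustrative headroom check: given current utilisation and an observed
# growth rate, estimate how long until a chosen threshold is breached.
# All figures below are made up for the example.

def days_until_threshold(used: float, capacity: float,
                         daily_growth: float, threshold: float = 0.8) -> float:
    """Days until used/capacity exceeds `threshold`, assuming linear growth.

    Returns 0 if the threshold is already breached, and infinity if usage
    is not growing.
    """
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    headroom = threshold * capacity - used
    if headroom <= 0:
        return 0.0
    if daily_growth <= 0:
        return float("inf")
    return headroom / daily_growth


# Example: 70 TB used of 100 TB, growing at 0.05 TB/day, alert at 80 percent.
print(f"{days_until_threshold(70, 100, 0.05):.0f} days of headroom left")
```

A simple report like this, reviewed regularly by whoever owns capacity, is one way of keeping headroom as “a headline piece” rather than something discovered during an outage.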

“Making sure the user gets what they want is important, too,” Paul shared, “and we’re noticing that those doing well with this have tended to shift from a governance that’s about a digital programme to implement an EPR, to making digital part of the transformation programme, rather than separate”. That switch is “something we should be considering,” he went on, “like not having a separate digital strategy, but having a service strategy with digital threaded all the way through it”. Remembering that vendors are “not the enemy” when working with them is another important point, Paul went on, “and they are part of the answer when it comes to giving us the backup and skills we need to address some of these issues we’ve identified here”.

We’d like to thank Paul and Mike for taking the time to share their thoughts and insights with us on this topic.
