Two NHS surgeons are using Azure AI to spot patients facing increased risks during surgery
As you’re reading this, more than 6 million people in England are waiting for treatment by the National Health Service. To put that into context, that’s 1 million more people than the entire population of Ireland.
The COVID-19 pandemic has made this situation worse, with staff shortages and the suspension of non-urgent operations resulting in another 2.3 million people being added to waiting lists since May 2020.
The U.K. government is investing 36 billion pounds (about $44 billion USD) in health and social care over the next three years to “embrace innovation” and cut waiting lists. Surgical hubs, virtual wards and artificial intelligence (AI) “are key to tackling the backlog and putting the NHS on a sustainable footing,” the government said.
Now, a team of medical professionals at one of the largest NHS trusts in the country is exploring how AI could help reduce waiting times, support recommendations from healthcare teams and provide patients with better information so they can make more informed decisions about their own care.
Orthopedic surgeons Justin Green and Mike Reed at Northumbria Healthcare NHS Foundation Trust have developed an AI model that helps consultants give their patients a personalized risk assessment of upcoming hip or knee operations. That reassurance comes at one of the most stressful and worrying times of patients’ lives.
“When I see a patient in clinic, they look me in the eye and ask, ‘Will I be all right?’ It’s very difficult to predict that, and I end up giving a fairly general answer,” Reed says. “I hope this technology will give me a better indication about what is going to happen to those people.”
Green adds that the specialist and the patient should always make a decision together about where an operation should happen, and that it should be in the best interests of the individual. But technology can enable them to have a more informed and accurate conversation by unearthing more relevant information.
For example, because the AI model is hosted in Microsoft’s Azure cloud and uses the Responsible AI dashboard in Azure Machine Learning, medical professionals are given a clearer understanding of why the AI has reached its conclusions. That’s critical in the ultra-cautious healthcare sector. Consultants can now see how the model works and have confidence that the advice they give to patients is based on accurate and reliable data.
Northumbria Healthcare NHS Foundation Trust is the third largest U.K. hub for joint replacements, performing around 3,000 such operations every year. Green and Reed have used the tool in a small number of interactions with patients who need hip and knee operations but they believe it can be applied to most areas of healthcare.
“I think this will be transformational for predicting surgery outcomes and risk,” Reed says. “This is just the start and there will be lots of areas we can look at, right across healthcare. It doesn’t need to be in orthopedics. The concept we’ve developed is completely transferable to predict risk from any surgery.
“We’ve already had interest from a number of organizations. One NHS trust wants to run a pilot because one of their hospitals has 10,000 people on its waiting list. They want to understand whether they can offer surgery in a smaller hospital to individuals who are at low risk of complications such as strokes and heart attacks. At the moment, those patients might have to wait a very long time for availability in a larger hospital.”
One key benefit of Green and Reed’s AI model is that it helps NHS trusts allocate resources by identifying patients’ specific needs.
For example, someone in their early 60s who doesn’t smoke and has low blood pressure would be seen as a “low risk” patient for surgery. They could have their operation sooner if they decide to have it done in a smaller hospital that might not have the resuscitation and intensive care areas that are found in bigger hospitals.
Without the AI-powered personalized risk assessment, many patients currently have to wait longer for an operation appointment at larger hospitals. That creates huge demand for services in larger hospitals when many of those patients could be safely operated on in smaller ones. In larger hospitals, beds may also be taken by other acutely ill patients, so surgery is more likely to be canceled.
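In outline, risk-based routing of this kind can be sketched in a few lines of code. This is a minimal illustration only: the features, coefficients, intercept and threshold below are invented for the example and are not taken from the trust’s actual model.

```python
# Illustrative sketch of routing patients by predicted complication risk.
# All weights and the threshold are hypothetical, for illustration only.
import math

# Hypothetical per-feature weights (log-odds per unit) and intercept
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
INTERCEPT = -9.0

def complication_risk(patient: dict) -> float:
    """Predicted probability of a post-surgery complication (logistic model)."""
    score = INTERCEPT + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

def suggest_site(patient: dict, threshold: float = 0.05) -> str:
    """Low-risk patients can be offered surgery at a smaller hospital."""
    return "smaller hospital" if complication_risk(patient) < threshold else "larger hospital"

# A non-smoking patient in their early 60s with low blood pressure
patient = {"age": 62, "systolic_bp": 118, "smoker": 0}
print(f"risk={complication_risk(patient):.1%}, route={suggest_site(patient)}")
```

With these invented numbers, the 62-year-old non-smoker falls below the 5% threshold and could be offered the smaller-hospital slot, while an older patient with high blood pressure would be routed to a larger hospital.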
The risk assessment also allows patients at high risk of complications to decide whether they want the operation at all.
“We would consider a complication following surgery to be a bad result,” Green says. “It’s costly to the patient, it’s costly to the NHS, it can take time and it can stop someone else having an operation. It has a massive impact on the health system as a whole. Currently, we might give them a generalized risk score that says, ‘You tick these three criteria, therefore your risk of an unsuccessful operation is 7%, as opposed to the national 2%.’ There’s nothing personalized about that. As a patient, all I know is that I’ve got three ticks in seven boxes, my risk is a little bit high and I can do absolutely nothing about that. So, I might decide not to have the operation.
“Now, we can show them in very granular detail how the AI model behind that prediction is coming up with its result that’s based on hundreds of data points such as age, blood parameters, body mass index and previous medical history.”
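The granular breakdown Green describes resembles per-feature attribution: showing how much each input pushed an individual patient’s prediction up or down. For a linear model, those contributions can be read off directly, as in the sketch below (the weights and baseline values are invented; in the real system, the Responsible AI dashboard derives attributions from the trained model):

```python
# Minimal sketch of per-feature attribution for a linear risk model.
# Weights and baseline values are hypothetical, for illustration only.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "bmi": 0.05, "platelets": -0.003}
BASELINE = {"age": 65, "systolic_bp": 125, "bmi": 27, "platelets": 250}

def explain(patient: dict) -> dict:
    """Contribution of each feature to the risk score, relative to an
    average ('baseline') patient, in log-odds units."""
    return {k: WEIGHTS[k] * (patient[k] - BASELINE[k]) for k in WEIGHTS}

patient = {"age": 72, "systolic_bp": 140, "bmi": 31, "platelets": 180}
for feature, contrib in sorted(explain(patient).items(),
                               key=lambda kv: -abs(kv[1])):
    direction = "raises" if contrib > 0 else "lowers"
    print(f"{feature:12s} {direction} risk score by {abs(contrib):.3f}")
```

Unlike a tick-box score, this tells the patient which factors are driving their individual risk, and by how much.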
That insight into the “how” is only possible because the model runs in Microsoft’s Responsible AI dashboard, which assists AI developers with the fairness, interpretability and reliability of AI models. Within the dashboard, the tools can communicate with each other and show insights in one interactive canvas to help with debugging and decision-making.
Sarah Bird is a principal group product manager at Microsoft and leads the responsible and ethical development of the Azure AI Cognitive Services.
“The Responsible AI dashboard brings lots of tools together, and that’s really handy for a sector like healthcare, which has to make sure there aren’t significant errors in an AI model and understand why it’s making a particular decision,” she says. “The tools allow teams to govern their AI more effectively and help them use it responsibly.”
Mehrnoosh Sameki, Responsible AI Tools tech lead for Microsoft, adds that having a complete view of ethical AI principles is crucial when AI is used in a healthcare setting.
“Azure Machine Learning Responsible AI dashboard enables ML professionals to train and deploy more transparent, robust and fair machine learning models in healthcare production cycles,” she says. “The dashboard insights could then be shared via a scorecard, which bridges the gap between machine learning and healthcare professionals, and provides an easy way to communicate the model performance insights and top features that impact patient-facing decisions.”
While Green and Reed’s AI model is hosted in the Azure cloud, the clinicians own and oversee the entire project and its applications at all times.
The dashboard can suggest potential gaps in the data that could give any clinicians using AI an incomplete view of a particular patient.
“Some of the Microsoft tools around responsible AI are really good and show where those biases are,” Green says. “Those dashboards are fantastic.”
Reed agrees and adds that having “explainable AI” is critical for a healthcare organization.
He also says that even after many decades of experience in orthopedics, he was surprised by some findings that the Responsible AI dashboard helped him spot.
“I was looking at what the AI model looks for to predict a risk of a ‘moderately severe’ complication. The dominant one was age, which was pretty obvious, followed by high blood pressure, which also made sense. The third one was the number of platelets.” Platelets are cell fragments in the blood that help with clotting.
Reed was surprised to see that platelets carry such a significant weight in determining the outcome from surgery when compared to the other factors, and it may lead to new areas of research. That finding would have to be validated with different approaches, but it shows how technology is helping medical professionals to think differently about care.
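The ranking Reed describes is a global-importance view: averaging the size of each feature’s contribution across a whole cohort rather than one patient. A minimal sketch of that idea follows, using synthetic cohort data and invented weights rather than anything from the trust’s model:

```python
# Sketch: ranking features by mean absolute contribution across a cohort,
# the kind of global-importance view an explainability dashboard provides.
# Cohort data, weights and spreads are synthetic, for illustration only.
import random

random.seed(0)
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "platelets": -0.003}
BASELINE = {"age": 65, "systolic_bp": 125, "platelets": 250}
SPREAD = {"age": 10, "systolic_bp": 15, "platelets": 60}  # typical variation

def cohort(n: int) -> list:
    """Generate n synthetic patients around the baseline values."""
    return [{k: random.gauss(BASELINE[k], SPREAD[k]) for k in WEIGHTS}
            for _ in range(n)]

def global_importance(patients: list) -> list:
    """Mean absolute log-odds contribution per feature, largest first."""
    totals = {k: 0.0 for k in WEIGHTS}
    for p in patients:
        for k in WEIGHTS:
            totals[k] += abs(WEIGHTS[k] * (p[k] - BASELINE[k]))
    return sorted(((k, v / len(patients)) for k, v in totals.items()),
                  key=lambda kv: -kv[1])

for feature, importance in global_importance(cohort(1000)):
    print(f"{feature:12s} {importance:.3f}")
```

A surprising feature near the top of such a ranking, like platelets in Reed’s case, is exactly the kind of signal that can prompt new clinical questions.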
NHS teams building their own AI models, as Green and Reed have done, are becoming increasingly common as the healthcare sector tries to manage growing workloads and provide cutting-edge care to millions of people.
Earlier this year, Health Education England, which supports the delivery of healthcare to the public, published its first roadmap to the use of AI in the NHS, which showed that the healthcare sector “recognizes the power and potential for AI to increase resilience, productivity, growth, and innovation.”
A total of 60 technologies are expected to be ready for large-scale deployment in England’s healthcare sector within a year. There are plans to roll out these and other digital tools across 67 clinical areas, including radiology, cardiology and general practice.
Patients might not notice the changes when they visit a hospital or their GP, but they could soon be benefitting from a more personalized and informative care experience.
Top image: Orthopedic surgeons Justin Green and Mike Reed from the Northumbria Healthcare NHS Foundation Trust look at Microsoft’s Responsible AI Dashboard (Photo credit: Jonathan Banks)