Keeping the quality of care in check through advanced AI

By World Healthcare Journal

Anywhere in the world, accidents happen. Technology wears out, unavoidable incidents occur, people make mistakes - and the NHS isn’t exempt from that fact. The real cause for concern isn’t that things go wrong - it’s what happens when they do. The consistency of a system such as the NHS, on which millions of lives depend, must remain at an exceptional standard.

The Care Quality Commission (CQC) regulates and maintains the standard of care in England, raising red flags where healthcare standards aren’t being met. As the independent regulator of health and social care in England, the CQC receives hundreds of thousands of notifications of incidents at clinics, which need to be seen, addressed, and fixed as soon as possible. But how can these reports best be addressed?

WHJ speaks to Richard Oakley and Simon Swift of Methods Analytics about the expert AI system they are developing with the CQC to aid the processing of the mountains of data from across the NHS and private providers.

Richard is Director of Data Science & AI at Methods and a champion of data-led decision making. He’s been involved in a huge variety of projects across healthcare in his career, having previously worked in the NHS as a data scientist before moving to Methods.

Simon is the Managing Director of Methods Analytics, focused on improving the use of evidence-based analysis and information. Simon initially worked as a junior surgeon in the NHS for 8 years before retraining in Health Economics and Health Policy.

Auditing the NHS – no easy feat, no easy fix

Over the last decade, the CQC’s remit and responsibility have expanded massively. Moving from a relatively small body working in a fairly straightforward manner to overseeing the regulation of a huge number of highly specialised expert organisations is no simple step. Just one hospital comprises several hundred staff and a huge number of complex interacting systems, plus varying seasonal and regional factors - and it’s down to the CQC to audit that effectively, fairly, and honestly, often in a tight time frame.

“It’s a very complicated system, and the CQC has been trying to step forward in terms of what they can do with their data,” says Richard.

“So they have internally challenged themselves to become a more data-oriented organisation, developing better internal processes and utilisation of the data that they already have, but also investing more into new technology, such as AI and expert systems.”

The issues of independent regulation

The CQC understandably faces many of the expected (primarily people-based) problems that other independent regulators (e.g. Ofsted) have to deal with. Putting individual inspectors on the ground to examine complex, intrinsically people-based systems is costly to do well, and it places a great deal of pressure and responsibility on the inspectors themselves.

“The inspectors have a hugely powerful role. Looking at how centres are rated – the top “outstanding” rating, for example – you can be pretty sure that a place is outstanding if it has that label on it. So, it’s important that they take that responsibility and power very seriously. The problem with all of these things, as usual, lies in the middle ground,” says Richard.

Furthermore, developing an expert AI system to process incident notifications will be a fruitless endeavour if the institutions themselves aren’t reporting properly. Creating a culture in the workplace where staff can feel comfortable enough to report their concerns without fear of blame is key to developing a good safety culture and improving the quality of care.

“For instance, if you have an organisation with a very open and strong reporting culture, the notifications from them are going to have a very different form and format compared to an organisation that has a poor reporting culture,” he adds.

The AI Expert System

Regulating individual systems that are vastly different and constantly evolving, while simultaneously assessing the likelihood of incidents recurring and then dealing with them, is very hard for a human being to do. To help the CQC handle the vast quantities of data it manages on a day-to-day basis, Methods has developed a bespoke “AI pipeline” to sort, classify, and funnel the data the CQC receives to the correct places. The pipeline automates the management of the high volume of free-text notifications and reports, so that notifications find the right people in the quickest way possible.

“We are building an AI pipeline that takes in this information which the machine then reads into a dataset. We have used a combination of this data and feedback from people in the CQC to build a classification system for that area of care,” says Simon.

“This is a way of structuring information, and information about information. We have used it to develop a formula to classify the content of all of these notifications in several ways.”

“One of these ways is sentiment – is this a positive thing or a negative thing? We’re also wrapping around the notifications even more classification information: where did this happen? Who did it happen to? When was it reported? What members of staff were present?”

“With this we can start to create themes. Combining these thematic classifiers with the sentiment analysis, informed by the relationship between these themes and event outcomes across the entire historical data set, allows us to classify notifications as higher or lower priority, based on our interactions with the data the CQC has provided.”
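The shape of the pipeline Simon describes - sentiment, wrap-around metadata, themes, and a combined priority - can be sketched in a few lines. To be clear, this is an illustrative toy, not the CQC system: the keyword lists, theme names, and priority rule are all invented placeholders standing in for the real trained classifiers.

```python
# Toy sketch of a notification-classification step: assign sentiment and
# themes to a free-text notification, then combine them into a coarse
# priority. Every keyword list and threshold here is hypothetical.

NEGATIVE_TERMS = {"fall", "injury", "error", "delay", "shortage", "complaint"}
THEME_KEYWORDS = {
    "medication": {"dose", "medication", "prescription"},
    "staffing": {"shortage", "understaffed", "agency"},
    "safety": {"fall", "injury", "restraint"},
}

def classify(notification: str) -> dict:
    """Return sentiment, themes and a priority label for one notification."""
    words = set(notification.lower().split())
    sentiment = "negative" if words & NEGATIVE_TERMS else "neutral"
    themes = [t for t, kws in THEME_KEYWORDS.items() if words & kws]
    # A real system would weight themes by their historical association
    # with poor outcomes; here any negative, themed report is high priority.
    priority = "high" if sentiment == "negative" and themes else "low"
    return {"sentiment": sentiment, "themes": themes, "priority": priority}

print(classify("patient fall due to staffing shortage on night shift"))
```

In a production system the keyword matching would be replaced by trained text classifiers, and the priority rule by the historical theme-to-outcome relationships Simon mentions; the structure - classify several ways, then combine - is the point being illustrated.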

Simply put, this AI provides the CQC with the ability to achieve outcomes that they couldn’t dream of achieving with traditional methods of data processing and management.

“Machines are really good at finding information that humans would find difficult. If a human were looking through these notifications in order, they’d be able to do the same analysis that the system can do. However, nobody has the time to read a notification, then go back and look at all of the notifications for that organisation for the next 3 months and compare the data, unless they’re expecting to see a problem. But the computer can,” says Richard.
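The cross-notification comparison Richard describes - looking across months of an organisation’s reports for patterns no single reader would spot - could be sketched as a simple windowed count. The organisations, themes, dates and threshold below are invented for illustration; a real system would compare far richer classified content.

```python
# Minimal sketch of windowed cluster-spotting: for each notification, count
# how many reports from the same organisation share its theme within a
# 90-day window, and flag (organisation, theme) pairs that recur.
from datetime import date

def flag_clusters(notifications, window_days=90, threshold=3):
    """notifications: list of (org, date, theme) tuples.
    Returns the set of (org, theme) pairs that cluster in the window."""
    flagged = set()
    for org, when, theme in notifications:
        in_window = [
            n for n in notifications
            if n[0] == org and n[2] == theme
            and 0 <= (n[1] - when).days <= window_days
        ]
        if len(in_window) >= threshold:
            flagged.add((org, theme))
    return flagged

reports = [
    ("Clinic A", date(2020, 1, 5), "medication"),
    ("Clinic A", date(2020, 2, 1), "medication"),
    ("Clinic A", date(2020, 3, 20), "medication"),
    ("Clinic B", date(2020, 1, 5), "medication"),
]
print(flag_clusters(reports))  # only Clinic A's recurring medication theme is flagged
```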

By incorporating expert AI systems into the regulatory process, and identifying more efficiently why problems are arising, it’s not only money and thousands of hours that are saved, but lives as well.

A system not just for the UK, but for the world

The AI pipeline isn’t a UK-specific product, nor is it relevant only to healthcare. Because the system is built around free-text narrative, it can be redesigned or repurposed for businesses, services and governments potentially anywhere in the world.

“If another international healthcare regulator came to us, we could adjust it for their environment. We can change the language that we use and tune the system for their sets of organisations and their areas of concern,” says Simon.

Another vital aspect that makes the AI expert system even more exportable is that it would retain its functionality in less developed health systems. Because it does not depend on reports and notifications arriving in a specific format, the AI can be deployed where electronic health records aren’t yet fully established.

“The only thing that we would need to bolt on to the system is optical character recognition, and that’s already a very mature technology. You could even put an Alexa on to it - ‘Alexa, I need to report an incident’, for example,” says Richard.

Expert AI in the future - what next?

The development of AI has made vast changes to the way we live our lives. Advanced AI assistants are necessities in modern smartphones while AI devices such as Amazon’s Alexa have changed the way people approach many aspects of day-to-day life: shopping, entertainment, planning. And now we are seeing the real impact of AI on health and care systems.

In the UK, Health Secretary Matt Hancock announced a £250m investment to boost AI development within the NHS. It includes the creation of a national AI lab to create new systems and technologies to make care systems more efficient, diagnose conditions quicker and to achieve higher levels of accuracy.

These new technologies will also process data in new and innovative ways, and above all improve the patient experience. Methods’ expert AI for the CQC isn’t the first time we’ll see advanced AI improving our health and care systems - and it definitely won’t be the last.



#whjfeature #whjmethods #whjsimonswift #whjdata #whjdigitalhealth