Sepsis is poorly understood, difficult to identify, and even harder to predict. Consistent stakeholder involvement may be key to identifying precisely where and how a sepsis early warning system could improve the team-based response to the condition.
As an academic critical care physician and clinical informatician, believe me when I say that sepsis is a hard problem to solve. Sepsis is poorly understood, difficult to identify, and even harder to predict. For a routine killer, it is only ever obvious in hindsight. Its complexity has lured researchers and data scientists into developing dozens of advanced sepsis early warning systems.
Unfortunately, these innovators are waging an uphill battle not only against a complex killer, but also against confirmation bias that leads both the media and health care professionals to assume the solutions won’t work.
Sales pitches from vendors across the artificial intelligence (AI) landscape are often peppered with lofty promises but short on robust data to support their claims. Developers are naturally focused on the art and science of sophisticated model-building and tend to have less insight into exactly how such a model could be used to make a difference in the real world.
Anyone who has followed the IBM Watson experiment knows exactly what happens when complex environments, tech solutions, and overpromising collide.
Lessons from the Watson era are, however, easy to overread. As we’ve seen with the failure of “health care takeover” efforts from Amazon, Google, and others, technology companies struggle when they attempt to apply what they know to an entirely new domain.
That may be exactly why journalists were quick to latch on to recent reports questioning the efficacy of a sepsis early warning system developed by Epic. It seemed to confirm what they already believed about the utility of AI in advancing health care. But in their eagerness to point to yet another AI failure, they overlooked the limitations of the validation study in question and ignored several examples of how the system has already been used to improve sepsis-related outcomes.
Validation studies help us understand how a model performs in a vacuum, but they don’t reflect how a tool like a sepsis early warning system helps the care team in real practice. That is the province of observational studies, which measure a model’s impact in real-world use. Early warning systems were designed to augment a clinician’s judgment, not replace it, and validation studies cannot capture that interplay.
The “solution” to sepsis, as it is with most things in health care, is consistently and efficiently applying what we know works. A whole field of research—implementation science—is built on this very notion. Experienced implementation scientists know that no matter how good a tool might be, process and people are the cornerstones to positive reform.
In our implementation of the Epic sepsis early warning system, we focused on understanding every step and player in the sepsis-response process.
To test the model’s potential early returns, we used a randomized quality improvement cycle. Over a period of five months, approximately 600 patients were assigned to one of two groups: one received standard care for sepsis, while the other received that care augmented by the automated early warning system.
Five months into the study, it became clear that it would be unethical to withhold the sepsis early warning system from any emergency department patient. The system was delivering pharmacist notifications that shortened time to antibiotic administration and improved patient outcomes, without burdening the care team with excessive alerts.
Consistent stakeholder involvement was key to identifying precisely where and how a sepsis early warning system could improve the team-based response to sepsis. The model was merely the tool that brought the team together in a timely fashion. AI can demonstrably be leveraged to improve sepsis-related outcomes. But the how matters every bit as much as the what.
Yasir Tarabichi, MD, MSCR, is the director of Research Informatics at MetroHealth and an assistant professor of medicine at Case Western Reserve University.