The quality imperative in health care

May 19, 2015

Most disasters are single destructive events. But the most damaging disasters, particularly in developed countries like the US, unfold gradually and systemically. Medical error and waste are estimated to factor into some 400,000 premature deaths annually and to cost the US health care system as much as a third of its total expenditure. The toll in human life-years is astonishing, but the cost is egregious as well: with US health spending at roughly $3 trillion a year, that waste approaches $1 trillion annually – the equivalent of taking 10 million brand-new Porsche 911s and smashing them for fun, every year. To put it another way, US medical error and waste exceeds the GDP of all but the top 15 countries.

Few disagree that the health care system needs a quality revolution. Since the Institute of Medicine’s 1999 report To Err Is Human formalized the medical error epidemic, the mainstream solution has essentially been to improve clinical outcomes in order to simultaneously reduce costs. But the theory of disruptive innovation – and early attempts at clinical quality control and pay-for-performance (P4P) models – suggests that we should proceed with caution. Here, we argue that quality controls and rigid outcome metrics make sense only for repeatable, rules-based processes, not for medicine that is still in the empirical or intuitive realm.

First, it’s important to understand that many arguments for quality-driven cost control stem from W. Edwards Deming’s success with quality control in manufacturing, exemplified by the Toyota Production System. Toyota and its adopters transformed manufacturing by reducing waste, eliminating overproduction, and limiting inconsistency in building cars. Many principles of this “lean” methodology have been applied successfully to existing rules-based medicine (for example, labor, delivery, and postnatal care at Intermountain Healthcare). But this continuous learning approach is very different from a static quality control system, and many attempts to correct medical errors and associated waste lose sight of the distinction.

Research in the early 2000s found correlations between hyperglycemia and in-hospital morbidity and mortality, and clinicians responded by implementing tight glucose control measures. The rollout was a uniform flop. Randomized clinical studies of postoperative and ICU patients showed that patients under stricter blood sugar controls were actually less likely to survive than patients whose physicians were given more latitude.

The reason for this failure is that the “right” blood glucose levels for diabetic ICU patients aren’t precisely understood; they remain in the realm of empirical medicine, where good practices work for some patients but not uniformly or with predictable effectiveness. Many other diseases, like coronary artery disease and obesity, fall into the same camp: their characterization is currently population-based and inherently probabilistic.

In health care, the problem lies in assuming we have a causal model for disease (precision medicine) when what we actually have is a probabilistic or empirical understanding based largely on correlations and symptomatic observation. Quality controls make sense when a disease is precisely defined and treated, but not while its treatment remains empirical or intuitive.

The bottom line? We need to keep learning. In the late 19th and early 20th centuries, the US entered the modern age, yet infectious plagues still took a horrific toll on human life, particularly during the polio epidemics and WWI. Once the root causes of infectious disease became clear, scientists were able to develop differentiated treatments that targeted specific pathogens. And they began to understand, in a generally applicable way, when and where these treatments should be used – and, perhaps just as importantly, when they should not.

Before then, a practitioner treating an infectious disease was essentially gambling on a combination of treatments. Today, proper quality control around clinical adherence – for example, administering vaccinations to newborns in the hospital – prevents many of those same diseases outright. It now costs much less to deliver much more predictable outcomes for many infectious diseases.

Improving performance outcomes also requires a precise understanding of treatment processes, and because of the indefinite nature of most intuitive medicine, this is a difficult task. HHS consequently faces long odds in setting performance metrics for accountable care organizations (ACOs) and other P4P models that accurately reflect this reality; improving clinical outcomes in ACOs via P4P has proven uncertain at best. Given the indefinite nature of the many diseases and comorbidities managed in a tertiary care system, assigning rigid performance metrics to treatment patterns that are still in the intuitive or empirical realms is a mistake. It is far easier, and more advisable, to measure performance around repeatable processes that are completely understood and for which rules-based treatments exist. There, quality control measures can reduce waste and improve efficiency, particularly when care is moved out of the general hospital into a less complex ambulatory setting.

In the meantime, we run several risks by imposing performance metrics and associated quality controls prematurely. One is that the controls may in fact make quality outcomes worse. Another is that resources will become overly focused on optimizing for measured outcomes, at the expense of other important activities. A final issue with prematurely assigned performance metrics is their reliance on existing data: solutions to our cost, quality, and access problems will come from disruptive innovations – new market solutions for which no data yet exist.

For these reasons, it is important to keep the big picture in mind. Solving the medical waste and error crisis is paramount for every health care executive, practitioner, policy maker, and entrepreneur. But solutions that merely optimize current processes will address only part of the puzzle; disruptive innovations will be key to solving the rest.