The American Journal of Managed Care, April 2019

Do Health Systems Respond to the Quality of Their Competitors?

Daniel J. Crespin, PhD; Jon B. Christianson, PhD; Jeffrey S. McCullough, PhD; and Michael D. Finch, PhD
The authors determined whether Minnesota health systems responded to competitors’ publicly reported performance. Low performers fell further behind high performers, suggesting that reporting was not associated with quality competition.
Decision to Report

We present average marginal effects from the reporting model in Table 2 (coefficients in eAppendix A Table 2). Overall, reporting was highly persistent. Urban clinics whose health systems faced higher-performing competitors were less likely to report than urban clinics facing lower-performing competitors. Prior to the mandate, each 1-percentage-point increase in the mean ODC score of clinics in competing health systems was associated with a 0.74-percentage-point (95% CI, 0.01-1.47) decrease in the probability of reporting for urban clinics. Most clinics faced competition within 5 percentage points of average competition (ie, the mean of the competitor performance measure across the sample), implying that, relative to a clinic facing average competition, competitor performance typically shifted the probability of reporting by less than 4 percentage points. This effect diminished over the study period but remained significant.

Diabetes Care Performance

Table 3 presents the average marginal effects for the clinic performance model. These effects apply to all clinics, whether or not they reported (see eAppendix A Table 3 for effects conditional on reporting). Although clinics improved over time on average, responses to competitor performance imply divergence between high-performing and low-performing clinics. Clinics that had performed much better than competitors in the prior year improved their performance in the following year more than clinics that had performed similarly to competitors: on average, by 1.90 (95% CI, 1.35-2.61) percentage points in urban areas and 1.35 (95% CI, 0.61-1.94) percentage points in rural areas. The divergence was greatest in urban areas, where clinics that had performed much better than competitors improved their ODC scores by 2.99 (95% CI, 1.96-4.05) percentage points more than clinics that had performed slightly worse than their competitors, and by 4.06 (95% CI, 2.54-5.96) percentage points more than clinics that had performed much worse than their competitors. These results imply that relatively high-performing clinics were improving faster than low-performing clinics.

We found no significant effect of patient volume on performance, suggesting that clinics did not adjust their performance in response to changes in patient volume in the prior year. The coefficient on the inverse Mills ratio (eAppendix A Table 3) for urban clinics was 0.33 (95% CI, 0.03-0.62), implying that reporting clinics had higher performance than nonreporting clinics. The inverse Mills ratio was not significant for rural clinics.

Market Segmentation

We found significant associations between patient volume and relative performance only in urban areas (Table 4). Among all payers, clinics that had performed much better than competitors gained, on average, 16.1 (95% CI, 7.2-26.4) patients with diabetes compared with clinics that had performed similarly to competitors, and clinics performing slightly better than their competitors gained a similar number of patients. When we analyzed volume by payer (from 2009 onward), the gains in volume were attributable only to privately insured patients. These results imply that high-performing clinics were attracting privately insured patients, a likely intended outcome of public reporting efforts aiming to shift patients to higher-quality clinics. If these patients were relatively healthy, they may have contributed to higher performance scores. However, neither relatively high-performing nor low-performing clinics differentially avoided MHCP patients, suggesting that the divergence in performance between clinics is not attributable to market segmentation of this more difficult-to-treat population.
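A volume analysis of this kind regresses the change in patient counts on bins of prior-year performance relative to competitors, with "similar to competitors" as the baseline category. The following sketch on toy data illustrates the setup; the category names, magnitudes, and specification are illustrative assumptions, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
# Hypothetical bins of prior-year performance relative to competitors
cats = ["much_worse", "slightly_worse", "similar",
        "slightly_better", "much_better"]
df = pd.DataFrame({
    "rel_perf": rng.choice(cats, n),
    "volume_change": rng.normal(0.0, 10.0, n),  # change in diabetes patients
})
# In this toy data, clinics far above competitors gain ~16 patients
df.loc[df["rel_perf"] == "much_better", "volume_change"] += 16

# OLS of volume change on performance bins, "similar" as the reference
model = smf.ols(
    "volume_change ~ C(rel_perf, Treatment(reference='similar'))", data=df
).fit()
print(model.params.round(1))
```

Each reported coefficient is the average volume gain relative to clinics that performed similarly to their competitors, which is how the 16.1-patient estimate above should be read.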


Discussion

We examined whether health systems respond to the performance of their competitors—a behavior expected under retail competition that could lead to quality improvements. Although diabetes care performance improved in Minnesota clinics during our study, clinics that outperformed competitors subsequently improved more than clinics that had performed worse than competitors, indicating a divergence between high-performing and low-performing clinics. This result suggests that public reporting did not incentivize health systems to improve their low-performing clinics in response to competing against high-performing clinics in other systems.

Our results differ from those of Kolstad, who found that surgeons reduced their mortality rates when performing worse than expected after the introduction of report cards.2 However, differences between surgical mortality and diabetes outcomes likely limit the comparability of these findings. Compared with individual surgeons, health systems and their associated clinics may also have access to a variety of alternatives for increasing revenues or patient flows when faced with publicly reported performance. For example, they may acquire physician practices or invest in new service lines to attract patients, options that are unlikely to be available to individual physicians.
