The American Journal of Managed Care, April 2019

Do Health Systems Respond to the Quality of Their Competitors?

Daniel J. Crespin, PhD; Jon B. Christianson, PhD; Jeffrey S. McCullough, PhD; and Michael D. Finch, PhD
The authors determined whether Minnesota health systems responded to competitors’ publicly reported performance. Low performers fell further behind high performers, suggesting that reporting was not associated with quality competition.

Objectives: Some large employers and healthcare analysts have advocated for retail competition that relies on providers competing on performance metrics to improve care quality. Using publicly available performance measures, we determined whether health systems increased the quality of diabetes care provided by their clinics based on performance relative to competitors.

Study Design: Our analysis examined publicly reported performance measures of diabetes care from 2006 to 2013 for clinics in Minnesota health systems.

Methods: We obtained data for 654 clinics, of which 572 publicly reported diabetes care performance. Because some clinics did not report performance, we estimated a Heckman selection model. First, we predicted whether or not clinics reported performance. Second, we estimated the effect of relative performance (a clinic’s performance minus the mean performance of clinics in competing health systems) on clinic performance using the results of the reporting model to control for selection into the sample of reporting clinics.

Results: Although diabetes care performance improved over the study period, health systems did not differentially improve the performance of their clinics that performed worse than clinics in competing systems, indicating divergence between high-performing and low-performing clinics. This divergence does not appear to be due to risk selection.

Conclusions: Publicly reporting quality information did not incentivize health systems to increase the performance of their clinics with lower performance than competitors, as would be expected under retail competition. Our results do not support strategies that rely on competition on publicly reported performance measures to improve quality in diabetes care management.

Am J Manag Care. 2019;25(4):e104-e110
Takeaway Points

Our results suggest that from 2006 to 2013, health systems in Minnesota did not compete on publicly reported diabetes care measures as envisioned under retail competition.
  • Health systems did not differentially improve the diabetes care quality of their clinics performing worse than those in competing systems. Low-performing clinics fell further behind high-performing clinics.
  • A variety of reasons, including a lack of consumer awareness of publicly reported performance measures, may dissuade low-performing health systems from focusing on quality competition.
  • These results do not support strategies of competition on public performance measures as a means of achieving quality gains in diabetes care management.
Many economists hold the view, first articulated by Kenneth Arrow, that competitive models have limited ability to describe healthcare markets, in part because information about the outcomes, or quality, tied to specific services is imperfect.1 Not only do patients typically lack information when selecting providers or treatments, but information is also asymmetric across providers, who may be unaware of how their quality compares with that of competitors.2 These information problems make it difficult to compensate providers on value and therefore discourage them from competing on quality.3

Some large employers and analysts have advocated for increased retail competition to control medical care costs and improve quality of care. As summarized by Galvin and Milstein, “…providing consumers with compelling performance data and increasing their responsibility for the costs of care will slow the increase in health care expenditures and motivate clinicians to improve the quality and efficiency of their care.”4 Providers presumably would attempt to increase their performance relative to competitors to attract new patients, or retain existing ones, and to receive preferential treatment in health plan benefit designs that increase access to patients. Low-performing providers would be encouraged to catch up to high-performing providers, quality variation across providers would decrease, and quality throughout a market would rise.

Attempts to address information asymmetries by increasing the availability of performance information may not always result in a competitive effect.3 Providers may instead devote resources to other strategies effective in increasing revenues and patient flows. They may invest in new service lines or acquire physician practices to improve their bargaining position with payers. Some providers may attempt to improve performance by attracting healthier patients or avoiding patients who may be difficult to treat and contribute to lower performance. Furthermore, if providers believe that consumers do not use publicly available comparative quality information, such as provider report cards, when choosing physicians or hospitals, they might then be less likely to make quality improvement decisions with regard to the performance of competitors.

This study addresses whether health systems increased the quality of their clinics in response to their reported performance relative to competitors. Successful retail competition presumably would narrow the gap between low-performing and high-performing clinics.


Study Setting

The health systems and associated clinics in our study are located in Minnesota, a state dominated by a relatively small number of nonprofit integrated delivery systems.5 Minnesota Community Measurement (MNCM), a voluntary stakeholder collaborative, began an annual public reporting program for diabetes care at the clinic level starting with 2006 performance (reported in 2007). A 2008 Minnesota statute mandated reporting on a standardized set of quality measures,6 which ultimately included the diabetes care metrics reported by MNCM beginning with 2009 performance; however, there is no apparent penalty for not reporting. We identified 654 clinics (from 184 health systems and independent clinics) that offered diabetes care between 2006 and 2013, of which 572 reported their performance in at least 1 year. Diabetes performance measures have been publicly available in Minnesota for longer than in any other geographic area and, in a study involving 14 communities, Minnesota had the second-highest level of awareness of diabetes performance measures in 2012.7

Modeling Clinic Performance

We assumed that decisions regarding quality improvement, including whether or not a clinic submits reports, are made by health systems. This assumption is supported by the observation that most clinics within a given health system began reporting in the same year (eAppendix A Table 1 [eAppendices available at]). Nevertheless, individual clinic characteristics other than competition measures may influence performance. Therefore, we accounted for both the health system's competitive environment and individual clinic attributes.

Public reporting often is voluntary, and even mandated reporting may not result in 100% compliance. Estimating clinic performance using only clinics that submitted reports may lead to bias because of unobserved factors associated with both reporting and performance. To address this issue, we employed a Heckman selection model. In the first stage, we predicted clinic reporting status, allowing it to depend on prior-year reporting status, competitive environment, clinic characteristics, and the performance year.

In the second stage, we predicted clinic performance using a framework similar to that of Kolstad, which estimated how the performance of surgeons changed after obtaining information about competitors through report cards.2 This framework determines how providers respond to their relative performance—in our case, how much better (or worse) a clinic is compared with competitors—while controlling for patient volume to capture the response associated with patient demand. For both relative performance and patient volume, we used prior-year measures (ie, lagged) to reflect available information (eg, clinics had 2008 performance data in 2009) and to allow time to react to demand changes. As in the first-stage reporting model, performance was allowed to vary by competitive attributes, clinic characteristics, and performance year. We used the results of the reporting model to control for selection into the sample of reporting clinics. (eAppendix B provides a mathematical exposition.)
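The two-stage approach above can be sketched as a standard Heckman two-step estimator on simulated data. This is a minimal illustration only: the covariates, effect sizes, sample size, and the excluded instrument below are hypothetical stand-ins, not the paper's actual specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated clinic-year data (hypothetical variables and coefficients).
n = 2000
x = rng.normal(size=(n, 2))                  # clinic characteristics
w = rng.normal(size=n)                       # excluded instrument (e.g., a reporting-only predictor)
u, e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
reported = (0.5 * x[:, 0] - 0.3 * x[:, 1] + 0.8 * w + u) > 0
perf = 1.0 + 0.6 * x[:, 0] - 0.4 * x[:, 1] + e   # observed only for reporters

# Stage 1: probit of reporting status on the selection covariates.
Z = np.column_stack([np.ones(n), x, w])
def neg_loglik(a):
    idx = Z @ a
    # Signed index trick: logcdf(idx) if reported, logcdf(-idx) otherwise.
    return -np.sum(norm.logcdf(np.where(reported, idx, -idx)))
a_hat = minimize(neg_loglik, np.zeros(Z.shape[1]), method="BFGS").x

# Inverse Mills ratio from the stage-1 linear index corrects for selection.
idx = Z @ a_hat
imr = norm.pdf(idx) / norm.cdf(idx)

# Stage 2: OLS of performance on covariates plus the IMR, reporters only.
X = np.column_stack([np.ones(n), x, imr])[reported]
beta, *_ = np.linalg.lstsq(X, perf[reported], rcond=None)
print(np.round(beta, 2))  # last coefficient is the selection term (rho * sigma)
```

If reporting were unrelated to the unobserved determinants of performance, the coefficient on the inverse Mills ratio would be near zero and ordinary least squares on the reporting clinics alone would suffice.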

Market segmentation could affect performance and, therefore, our results. Some clinics may attract healthier patients or avoid difficult-to-treat ones to achieve higher performance that then would be attributable to changes in patient population rather than quality improvement. For example, some Medicaid patients are less adherent to medications, which could lead to worse clinical performance.8 To examine market segmentation, we re-estimated the model using patient volume as the dependent variable to determine whether volume differentially changed by clinics’ relative performance. If clinics of either relatively high or low performance differentially avoid difficult-to-treat patients whom they perceive will contribute to lower performance, then the results of this sensitivity analysis likely would find those clinics managing fewer Medicaid patients. If patients are not shifting between relatively high-performing and low-performing clinics, then it is unlikely that market segmentation influences our results.
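As a concrete illustration of the key regressor in both models, lagged relative performance for a clinic in a given year can be computed as follows; the clinic identifiers, systems, and rates are hypothetical, not study data.

```python
# Hypothetical clinic-year records: (clinic, system, year, performance rate).
rows = [
    ("A1", "SysA", 2008, 0.42), ("A2", "SysA", 2008, 0.38),
    ("B1", "SysB", 2008, 0.50), ("C1", "SysC", 2008, 0.46),
]

def lagged_relative_performance(rows, clinic, year):
    """Prior-year performance of `clinic` minus the prior-year mean
    performance of clinics in competing (i.e., other) health systems."""
    prior = [r for r in rows if r[2] == year - 1]
    own = next(r for r in prior if r[0] == clinic)
    rivals = [perf for (_, system, _, perf) in prior if system != own[1]]
    return own[3] - sum(rivals) / len(rivals)

# Clinic A1's 2008 rate (0.42) minus the mean of SysB and SysC clinics (0.48).
print(round(lagged_relative_performance(rows, "A1", 2009), 3))
```

A negative value marks a clinic performing below its competitors; under retail competition, such clinics would be expected to improve differentially in subsequent years.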
