Daniel J. Crespin, PhD; Jon B. Christianson, PhD; Jeffrey S. McCullough, PhD; and Michael D. Finch, PhD
Many economists hold the view, first articulated by Kenneth Arrow, that competitive models have limitations in describing healthcare markets. This reasoning rests on the imperfect information about outcomes, or quality, tied to specific services.1
Not only do patients typically lack information when selecting providers or treatments, but asymmetric information also applies across providers, who may be unaware of how their quality compares with that of competitors.2
These features make it difficult to compensate providers based on value and therefore discourage them from competing on quality.3
Some large employers and analysts have advocated for increased retail competition to control medical care costs and improve quality of care. As summarized by Galvin and Milstein, “…providing consumers with compelling performance data and increasing their responsibility for the costs of care will slow the increase in health care expenditures and motivate clinicians to improve the quality and efficiency of their care.”4
Providers presumably would attempt to increase their performance relative to competitors to attract new patients, or retain existing ones, and to receive preferential treatment in health plan benefit designs that increase access to patients. Low-performing providers would be encouraged to catch up to high-performing providers, quality variation across providers would decrease, and quality throughout a market would rise.
Attempts to address information asymmetries by increasing the availability of performance information may not always result in a competitive effect.3
Providers may instead devote resources to other strategies effective in increasing revenues and patient flows. They may invest in new service lines or acquire physician practices to improve their bargaining position with payers. Some providers may attempt to improve performance by attracting healthier patients or avoiding patients who may be difficult to treat and thus contribute to lower performance. Furthermore, if providers believe that consumers do not use publicly available comparative quality information, such as provider report cards, when choosing physicians or hospitals, they may be less likely to base quality improvement decisions on the performance of competitors.
This study addresses whether health systems increased the quality of their clinics in response to their reported performance relative to competitors. Successful retail competition presumably would narrow the gap between low-performing and high-performing clinics.
The health systems and associated clinics in our study are located in Minnesota, a state dominated by a relatively small number of nonprofit integrated delivery systems.5
Minnesota Community Measurement (MNCM), a voluntary stakeholder collaborative, began an annual public reporting program for diabetes care at the clinic level starting with 2006 performance (reported in 2007). A 2008 Minnesota statute mandated reporting on a standardized set of quality measures,6 which ultimately included the diabetes care metrics reported by MNCM beginning with 2009 performance; however, there is no apparent penalty for not reporting. We identified 654 clinics (from 184 health systems and independent clinics) that offered diabetes care between 2006 and 2013, of which 572 reported their performance in at least 1 year. Diabetes performance measures have been publicly available in Minnesota for longer than in any other geographic area and, in a study involving 14 communities, Minnesota had the second-highest level of awareness of diabetes performance measures in 2012.7
Modeling Clinic Performance
We assumed that decisions regarding quality improvement, including whether or not a clinic submits reports, are made by health systems. This assumption is supported by the fact that the majority of clinics within a given health system began reporting in the same year (eAppendix A Table 1 [eAppendices available at ajmc.com]). Nevertheless, individual clinic characteristics other than competition measures may influence performance. Therefore, we accounted for both the health system’s competitive environment and individual clinic attributes.
Public reporting often is voluntary, and even mandated reporting may not result in 100% compliance. Estimating clinic performance using only clinics that submitted reports may lead to bias because of unobserved factors associated with both reporting and performance. To address this issue, we employed a Heckman selection model. In the first stage, we predicted clinic reporting status, allowing it to depend on prior-year reporting status, competitive environment, clinic characteristics, and the performance year.
In the second stage, we predicted clinic performance using a framework similar to that of Kolstad, which estimated how the performance of surgeons changed after obtaining information about competitors through report cards.2
This framework determines how providers respond to their relative performance—in our case, how much better (or worse) a clinic is compared with competitors—while controlling for patient volume to capture the response associated with patient demand. For both relative performance and patient volume, we used prior-year (ie, lagged) measures to reflect the information available to clinics (eg, clinics had 2008 performance data in 2009) and to allow time to react to demand changes. As in the first-stage reporting model, performance varies by competitive attributes, clinic characteristics, and performance year. We used the results of the reporting model to control for selection into the sample of reporting clinics. (eAppendix B provides a mathematical exposition.)
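The two-stage structure just described can be sketched in equations. The notation below is ours and is only a stylized version of the specification detailed in eAppendix B:

```latex
% Stage 1 (selection): whether clinic i reports in year t (probit)
R_{it}^{*} = Z_{it}\gamma + u_{it}, \qquad
R_{it} = \mathbf{1}\!\left[ R_{it}^{*} > 0 \right]

% Stage 2 (outcome): ODC performance, observed only when R_{it} = 1
y_{it} = \alpha_{i}
       + \sum_{q} \beta_{q}\, \mathrm{RelPerf}^{(q)}_{i,t-1}
       + \delta\, \mathrm{Volume}_{i,t-1}
       + \tau_{t}
       + \rho\, \lambda\!\left( Z_{it}\hat{\gamma} \right)
       + \varepsilon_{it}
```

Here Z_{it} collects prior-year reporting status, competitive environment, and clinic characteristics; RelPerf^{(q)}_{i,t-1} are quintile indicators of lagged relative performance; alpha_i is a clinic fixed effect; tau_t is a performance-year effect; and lambda(z) = phi(z)/Phi(z) is the inverse Mills ratio from the first stage.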
Market segmentation could affect performance and, therefore, our results. Some clinics may attract healthier patients or avoid difficult-to-treat ones to achieve higher performance that then would be attributable to changes in patient population rather than quality improvement. For example, some Medicaid patients are less adherent to medications, which could lead to worse clinical performance.8
To examine market segmentation, we re-estimated the model using patient volume as the dependent variable to determine whether volume changed differentially according to clinics’ relative performance. If clinics with either relatively high or low performance differentially avoid difficult-to-treat patients who they perceive will contribute to lower performance, then this sensitivity analysis likely would find those clinics managing fewer Medicaid patients. If patients are not shifting between relatively high-performing and low-performing clinics, then it is unlikely that market segmentation influences our results.
Data and Measures
Our measure of performance is the Optimal Diabetes Care (ODC) score. The ODC score is the percentage of patients (aged 18-75 years) with diabetes who simultaneously achieve 5 treatment goals: (1) glycated hemoglobin (A1C) less than 7% (<8% starting with 2009 performance); (2) blood pressure less than 130/80 mm Hg (<140/80 mm Hg starting with 2010 performance); (3) low-density lipoprotein cholesterol less than 100 mg/dL; (4) daily aspirin use, unless contraindicated (includes only patients with ischemic vascular disease starting with 2009 performance); and (5) documented tobacco-free status.
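As an illustration, the all-or-nothing structure of the ODC score can be sketched in a few lines. This is not MNCM's actual implementation; the field names are hypothetical, and the thresholds reflect the definitions in effect from 2010 onward as described above.

```python
# Illustrative sketch of the ODC score (not MNCM's actual implementation).
# Field names are hypothetical; thresholds reflect the post-2010 definitions.

def meets_all_goals(p):
    """True if a patient record p (a dict) meets all 5 treatment goals."""
    return (
        p["a1c"] < 8.0                                # A1C < 8%
        and p["systolic_bp"] < 140 and p["diastolic_bp"] < 80
        and p["ldl"] < 100                            # LDL cholesterol, mg/dL
        and (p["daily_aspirin"] or not p["ivd"])      # aspirin required only for IVD
        and p["tobacco_free"]                         # documented tobacco-free
    )

def odc_score(patients):
    """Percentage of a clinic's patients with diabetes meeting all goals at once."""
    if not patients:
        return 0.0
    return 100.0 * sum(meets_all_goals(p) for p in patients) / len(patients)
```

Because every goal must be met simultaneously, a single out-of-range measure (eg, an A1C of 9%) drops a patient from the numerator entirely, which is why definition changes in individual components can shift the composite score substantially.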
We constructed indicators of relative performance and competitive environment. First, we calculated the mean ODC score of clinics in competing health systems. Each urban and rural health system’s competitors consisted of all clinics in other systems within 5 miles and 25 miles, respectively, of any one of the system’s clinics. To measure relative performance, we subtracted each clinic’s competitor ODC measure (ie, the mean ODC score of clinics in competing health systems) from its own ODC score. We created quintiles for this measure separately for urban and rural settings. Because we cannot measure relative performance for nonreporting clinics, we used the competitor ODC measure by itself in the reporting model. eAppendix C provides further explanation and descriptive statistics for these measures. We also created measures for the percentage of clinics in competing health systems that submitted reports and for the number of competing systems to further control for competitive environment.
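A minimal sketch of the relative-performance construction follows; function and variable names are ours, and the study built the quintiles separately for urban and rural clinics.

```python
# Hypothetical sketch of the relative-performance measure: a clinic's own
# ODC score minus the mean ODC score of clinics in competing health systems,
# then binned into quintiles of the (urban or rural) distribution.

def relative_performance(own_odc, competitor_odcs):
    """Own ODC score minus the mean ODC score of competitors' clinics."""
    competitor_mean = sum(competitor_odcs) / len(competitor_odcs)
    return own_odc - competitor_mean

def quintile(value, all_values):
    """Quintile (1 = lowest fifth, 5 = highest fifth) of value within all_values."""
    below = sum(v < value for v in all_values)
    return min(5, below * 5 // len(all_values) + 1)
```

For example, a clinic scoring 40% against competitors averaging 30% has a relative performance of +10 percentage points, which would place it in an upper quintile of its setting's distribution.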
The MNCM data include the annual number of patients with diabetes by clinic, which we used to control for responses associated with patient demand in the performance model. Beginning in 2009, these data include the number of patients enrolled in Medicare, Minnesota Health Care Programs (MHCP) (ie, Medicaid and other programs for low-income families and individuals), and private insurance, which we used to examine market segmentation.
We constructed explanatory variables for the reporting model from available single-year data sources. These measures are excluded from the performance model, in which we employed clinic fixed effects. We used 2009 American Community Survey 5-year estimates to create indicators of each clinic’s potential patient population. These indicators include the mean age of the population and the proportion of the population on any type of public assistance within 5 miles and 25 miles of each urban and rural clinic, respectively. We used a 2012 licensure data set to determine the number of physicians, percentage of specialists, mean physician age, and percentage of female physicians at each clinic. We determined federally qualified health center (FQHC) status and affiliation with a critical access hospital (CAH) through Web searches.
We estimated separate urban and rural models to account for location-driven differences in competition. In the first stage, we estimated reporting status using a probit regression. Our data suggest that many smaller clinics, and clinics perceived to have difficult-to-treat patient populations, were more likely to have delayed reporting until the mandate. Therefore, we interacted the 2009 performance year indicator with stand-alone clinic status, number of physicians, FQHC status, CAH affiliation, and the proportion of the potential patient population on public assistance. Because the mandate had a large influence on reporting decisions, we present average marginal effects on the probability of reporting over 3 periods: premandate (2007-2008), first mandate year (2009), and postmandate (2010-2013). In the second stage, we employed a fixed-effects model to estimate clinic performance that includes the inverse Mills ratio obtained from the first-stage estimation. Because the fixed-effects model required at least 2 observations per clinic, we excluded any clinic that reported in only 1 year, including all clinics that began reporting with 2013 performance, resulting in an estimation sample of 288 urban and 244 rural clinics. We present average marginal effects of each explanatory variable on clinic performance.
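For readers unfamiliar with the Heckman two-step correction, the inverse Mills ratio can be computed directly from the first-stage probit's fitted linear index. The sketch below is our own illustration, not the authors' code, and uses only standard-normal densities:

```python
import math

# Illustrative computation of the inverse Mills ratio, the selection-correction
# term carried from the first-stage probit into the second-stage model.

def normal_pdf(z):
    """Standard normal density, phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution function, Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z), evaluated at a clinic-year's fitted probit
    index. Appending this as a regressor in the performance model corrects
    for selection into the sample of reporting clinics."""
    return normal_pdf(z) / normal_cdf(z)
```

The ratio is larger for clinic-years with a low predicted probability of reporting, which is how the second stage adjusts for the fact that performance is observed only for reporters.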
Of 654 clinics providing diabetes care, 572 (87.5%) reported at least once between performance years 2006 and 2013. Urban clinics were more likely than rural clinics to be early reporters (Figure): for 2006, 32.1% of urban clinics and 14.8% of rural clinics reported their performance. The number of clinics reporting increased through performance year 2009, the year reporting became mandated, and then leveled off. By 2013, approximately 80% of clinics reported. Those not reporting were smaller, independent clinics that often had a higher percentage of specialists than reporting clinics (Table 1). Of the clinics that never reported, more than half were stand-alone clinics.
Although a large improvement in publicly reported performance occurred between 2008 and 2010 (Figure), previous research using these data found that this increase is mainly attributable to changes in the definitions of measures for A1C, blood pressure, and daily aspirin, which made it easier for clinics to achieve the performance goal.9
Adjusting for these definition changes, performance improved modestly over the study period.9
Decision to Report
We present average marginal effects of the reporting model in Table 2 (coefficients in eAppendix A Table 2). Overall, reporting was highly persistent. Urban clinics whose health systems faced higher-performing competitors were less likely to report than urban clinics with lower-performing competitors. Prior to the mandate, each 1-percentage-point increase in the mean ODC score of clinics in competing health systems was associated with a 0.74-percentage-point (95% CI, 0.01-1.47) decrease in the probability of reporting for urban clinics. Most clinics faced competitors whose mean performance was within 5 percentage points of the sample-wide average, implying that, relative to a clinic facing average competition, competitor performance typically shifted the probability of reporting by less than 4 percentage points. This effect diminished over the study period, although it remained significant.
Diabetes Care Performance
Table 3 presents the average marginal effects for the clinic performance model. These average effects apply to all clinics regardless of whether or not they reported (see eAppendix A Table 3 for effects conditional on reporting). Although, on average, clinics improved over time, responses to competitor performance imply divergence between high-performing and low-performing clinics. On average, clinics that had performed much better than competitors in the prior year improved their performance in the following year by 1.90 (95% CI, 1.35-2.61) percentage points more than clinics that had performed similarly to competitors in urban areas, and by 1.35 (95% CI, 0.61-1.94) percentage points more in rural areas. The divergence was greatest in urban areas, where clinics that had performed much better than competitors improved their ODC scores by 2.99 (95% CI, 1.96-4.05) percentage points more than clinics that had performed slightly worse than their competitors, and by 4.06 (95% CI, 2.54-5.96) percentage points more than clinics that had performed much worse than their competitors. These results imply that relatively high-performing clinics were improving faster than low-performing clinics.
We found no significant effect of patient volume on performance, suggesting that clinics did not increase their performance in response to greater or fewer patients in the prior year. The coefficient on the inverse Mills ratio (eAppendix A Table 3) for urban clinics was 0.33 (95% CI, 0.03-0.62), implying that reporting clinics had higher performance than nonreporting clinics. The inverse Mills ratio was not significant for rural clinics.
We found significant associations between patient volume and relative performance only in urban areas (Table 4). Among all payers, clinics that had performed much better than competitors gained, on average, 16.1 (95% CI, 7.2-26.4) patients with diabetes compared with clinics that had performed similarly to competitors, and clinics that had performed slightly better than their competitors gained a similar number of patients. When analyzing volume by payer (possible from 2009 onward), gains in volume were attributable only to privately insured patients. These results imply that high-performing clinics were attracting privately insured patients—a likely intended outcome of public reporting efforts aiming to shift patients to higher-quality clinics. If these patients were relatively healthy, then they potentially contributed to higher performance scores. However, neither relatively high-performing nor low-performing clinics were differentially avoiding MHCP patients, suggesting that the divergence in performance between clinics is not attributable to market segmentation of this more difficult-to-treat population.
We examined whether health systems respond to the performance of their competitors—a behavior expected under retail competition that could lead to quality improvements. Although diabetes care performance improved in Minnesota clinics during our study, clinics that outperformed competitors subsequently improved more than clinics that had performed worse than competitors, indicating a divergence between high-performing and low-performing clinics. This result suggests that public reporting did not incentivize health systems to improve their low-performing clinics in response to competing against high-performing clinics in other systems.
Our results differ from those of Kolstad, who found that surgeons improved their mortality rate if they were performing worse than expected after the introduction of report cards.2
However, differences between surgical mortality and diabetes outcomes likely limit the comparability of these findings. Compared with individual surgeons, health systems and their associated clinics may also have access to a variety of alternatives to increase revenues or patient flows when faced with publicly reporting performance. For example, they may acquire physician practices or invest in new service lines to attract patients, methods that are unlikely to be available to individual physicians.
Public reporting may encourage some health systems to take their first steps toward improving diabetes care quality, but these systems may lack the resources needed to develop sophisticated strategies focused on retail competition. Smith and colleagues found that several physician groups had little focus on diabetes care performance prior to reporting as part of the Wisconsin Collaborative for Healthcare Quality.10
These physician groups tended to implement simple quality improvement strategies when they started reporting, in contrast to the multiple-intervention strategies of higher-performing physician groups. In our study, many clinics that were independent or belonged to smaller systems did not begin reporting until it was mandated. Clinics that began reporting with the mandate scored 10 to 30 percentage points lower on the ODC score’s individual measures than clinics that began reporting earlier.9
Initial quality improvement efforts take time to execute because they may include improvements in health information technology, changes in office procedures, and recruitment of quality improvement champions, among other steps. In this study, some health systems may have been undertaking these steps without yet realizing large gains in quality. These smaller-system and stand-alone clinics also may lack the resources needed to implement the specific interventions required to compete on quality.11
Some health systems may not believe that public reporting ameliorates imperfect information between consumers and providers. A relatively small percentage of patients use public quality measures,7 and the evidence that public reports influence demand is mixed.12
High-performing clinics differentially attracted privately insured patients in our study, although these effects are likely small in terms of total revenue, considering that patients with diabetes represent only a fraction of a clinic’s patients. In addition, neither MHCP nor Medicare patients shifted from low-performing to high-performing clinics, reinforcing concerns about the usefulness of public reporting for publicly insured patients, including the Medicare Payment Advisory Commission’s contention regarding quality reporting in the Merit-based Incentive Payment System.13
Health systems may believe that the current level of consumer awareness and engagement is below the threshold needed to make investments in quality competition preferable to other uses of quality improvement resources. This explanation is supported by the evaluation of the Robert Wood Johnson Foundation’s Aligning Forces for Quality (AF4Q) initiative, in which MNCM participated. After 4 to 8 years of participation in AF4Q, community coalition leaders generally “…did not believe that the ‘competitive market strategy’…would improve provider quality or efficiency. In their experience, too few consumers sought out and used the information in this way…”14
One alternative to attracting patients and improving quality is mergers and acquisitions that allow for horizontal and vertical integration. During this study, larger health systems acquired several smaller systems and stand-alone clinics. These acquisitions were likely mutually beneficial, as they increased the market share of the larger health systems and improved performance at the acquired clinics.15
It is possible that competitor performance was mismeasured. Health systems may not view some clinics in their market areas as competitors; for example, a large integrated delivery system may not treat small independent clinics as competitors. However, in our study, a handful of large health systems dominate each market, and stand-alone clinics make up only 13% of clinics.
A complete model of competition ideally would incorporate price and quality information in relation to competitive responses. Although health systems may have attempted to adjust prices to gain bargaining power, MNCM did not report total cost measures until after the conclusion of this study.16
Health systems would have had little reason to adjust quality based on competitor pricing, as consumers could not base their decisions on prices without that information being available.
The substantial presence in this study market of large health systems may raise questions about generalizability to areas where smaller health systems are more dominant. However, recent trends have shown an increase in mergers and acquisitions throughout the United States, making vertically integrated health systems and concentrated markets increasingly the norm.17-19
Unique aspects of the healthcare market make it difficult to reward and incentivize quality improvement as envisioned in the competitive market paradigm.1
Even when market information asymmetries were addressed through public reporting, we find that health systems did not compete on quality as proponents of retail competition intended. Although public reporting may incentivize quality gains in diabetes care management through other mechanisms, relying on it to promote retail competition among physicians on performance measures is unlikely to be an effective strategy.