This study empirically examines whether market competition is a potential driver of hospital performance on the key evidence-based Joint Commission heart-failure quality indicators.
Objective: To investigate whether market competition is a potential driver of hospital performance on the key evidence-based Joint Commission heart-failure (HF) quality indicators of angiotensin-converting enzyme inhibitor/angiotensin receptor blocker prescribed, left ventricular function assessment, smoking-cessation counseling, and discharge instructions.
Study Design: Retrospective multivariate analysis.
Methods: Hospital performance data for HF were obtained from The Joint Commission’s ORYX program from 2003 to 2006. The performance data were linked with hospital characteristics from the American Hospital Association Annual Survey and area-level sociodemographic information from the Area Resource File. Healthcare markets were defined as hospital referral regions (HRRs), and market competition intensity was defined by the Herfindahl-Hirschman Index. Hospital-level and HRR-level ordinary least squares fixed effects regression models were used to estimate the relationship between market competition and performance.
Results: A paired comparison indicated that there was a significant change in the mean hospital-level performance over time on all of the HF quality indicators. In the multivariate analyses, hospitals in the least competitive markets (Quintile 5) performed slightly better (by 2.9%) than those in the most competitive markets (Quintile 1) for left ventricular function assessment (P <.01). At the HRR level, however, the least competitive markets (Quintile 5) performed moderately worse (by 5.1%) on the discharge-instructions quality indicator compared with the most competitive markets (Quintile 1) (P = .05).
Conclusions: Market competition intensity was associated with only small differences in hospital performance. The level of market competitiveness may produce only marginal incremental benefits to inpatient HF care.
(Am J Manag Care. 2011;17(12):816-822)
The significant chasm between the quality of care that heart failure (HF) patients should receive and actually receive has been widely documented. Previous studies have reported on the extensive variation in the treatment and management of HF patients in hospitals across the country.1,2 Because of the substantial geographic practice variations and underuse of appropriate HF therapies, payers and accrediting bodies have begun to measure hospital performance as a way to stimulate quality-improvement efforts. The Joint Commission now requires that hospitals submit information on their performance for the core conditions of heart attacks, HF, pneumonia, and others as part of the ORYX program, which is publicly reported on Quality Check (http://www.qualitycheck.org).2
ORYX was first developed by The Joint Commission in 1997 as a way to integrate performance and outcomes measures into a continuous accreditation process.2 The ORYX quality indicators are aligned with the Centers for Medicare & Medicaid Services (CMS) Hospital Compare performance measures. Since the ORYX program was initiated, there have been substantial improvements in hospital quality, although there remains a wide variation in performance across individual hospitals and states.3
Despite the enormous progress that has been made to narrow the quality gap, the underlying motivation for hospitals to increase their compliance with the standardized quality indicators and the reasons for the wide heterogeneity in performance across hospitals and geographic areas are unclear. Previous research suggests that hospitals might be motivated to act on publicly reported performance data due to market competition, professional standards, and/or a desire to preserve or enhance their reputation.4 In this study, we sought to empirically test whether market competition is a potential driver of hospital performance on the key evidence-based Joint Commission HF quality indicators. We focus specifically on HF because it is one of the core conditions measured by The Joint Commission and it is a common and costly chronic condition.
Pathways to Performance on Quality Indicators
There are 3 possible pathways through which hospitals might be motivated to act on publicly reported quality indicators.4 Hospitals may be motivated to improve their performance on quality indicators due to market forces, because they would like to hold on to or increase their market share of patients. Hospitals’ awareness of quality deficits might also be enough to stimulate quality-improvement efforts because of professional standards. Lastly, hospitals may be motivated to improve quality because they are concerned about protecting or enhancing their public image, since consumers may form certain opinions about a hospital.4 In this study, we focus on market competition as a potential driver of hospital performance. We hypothesize that if hospitals compete on quality, then hospitals in competitive markets will provide a higher level of quality and hospitals in concentrated (less competitive) markets will provide a lower level of quality.
DATA AND METHODS
The Joint Commission accredits more than 3000 hospitals that represent approximately 80% of hospitals in the United States and comprise more than 90% of all acute-care hospital beds.2 Quarterly data from The Joint Commission’s ORYX hospital performance measurement program for HF from 2003 to 2006 were used. The 4 evidence-based HF quality indicators examined were (1) angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) prescribed at discharge, (2) left ventricular (LV) function assessment, (3) smoking-cessation counseling, and (4) discharge instructions. Only yearly data points for the ACEI or ARB quality indicator in 2003 and 2004 were available, with the exception of Quarters 1 and 2 of 2003, since the definition for this quality indicator changed in 2005 to include ARB. A total of 3011 non-Federal, short-stay, Joint Commission-accredited acute-care hospitals were used as the primary units of analysis over 16 quarters (n = 48,176), with a secondary analysis that aggregated hospitals to 306 hospital referral regions (HRRs) over 16 quarters (n = 4896). Patients 18 years and older with LV systolic dysfunction HF defined by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes 402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, and 428.XX, who were admitted to Joint Commission-accredited hospitals and who met the reporting requirements, comprised the data elements for the individual HF quality indicators.
The ORYX data were linked with hospital characteristics data from the American Hospital Association Annual Survey and sociodemographic and market-level characteristics from the Area Resource File to serve as control variables. Market share was derived from the Medicare Provider Analysis and Review (MedPAR) file based on the total number of Medicare HF patients.
As part of our secondary analysis, we aggregated hospitals to the HRR level to mitigate the potential issue of the small number of cases at a particular hospital that may adversely impact a hospital’s performance, as well as the endogeneity problem of patients selecting hospitals based on perceived or other unobservable characteristics that may be correlated with quality of care. A fixed-effects approach was used to examine within-unit changes over time so that the results would not be affected by heterogeneous differences across HRRs that are constant over time.
We used HRRs to construct the measure of the healthcare market according to the crosswalk methodology of the Dartmouth Atlas of Health Care. HRRs are a naturally occurring healthcare market and they represent a geographic area where a significant proportion of medical care is provided by a referral hospital(s) serving an entire region.5
A quarterly measure of market competition was constructed based on the Herfindahl-Hirschman Index (HHI). The HHI is a standard measure of market competition and it has been widely applied to hospitals in HRRs. The HHI measure was created using Medicare HF patient volume derived from the MedPAR data file. Hospital market share was calculated as the total number of Medicare HF patients at a hospital divided by the total number of Medicare HF patients within an HRR, and it was scaled by 100.6 The HHI was then determined by taking the sum of the squared market shares for Medicare HF patients for all hospitals within an HRR.7,8 The HHI approaches 0 in a market with a very large number of small competitors and reaches 10,000 in a monopoly. In our analyses, however, we used quintiles of HHI where Quintile 1 represents the most competitive markets and Quintile 5 represents the least competitive markets.8
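The HHI calculation described above can be sketched as follows (the function name and toy patient counts are illustrative; the study derives actual volumes from the MedPAR file):

```python
def hhi(volumes):
    """Herfindahl-Hirschman Index for one market (HRR).

    volumes: Medicare HF patient counts, one entry per hospital in the HRR.
    Each hospital's share is scaled by 100, so the index reaches 10,000
    for a monopoly and approaches 0 as competitors multiply.
    """
    total = sum(volumes)
    shares = [100.0 * v / total for v in volumes]
    return sum(s * s for s in shares)

# Illustrative markets (hypothetical counts):
# a monopoly scores 10,000; two equal hospitals score 5,000.
print(hhi([400]))        # 10000.0
print(hhi([200, 200]))   # 5000.0
```

In the study, the resulting quarterly HHI values are then binned into quintiles rather than used as a continuous regressor.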
Heart Failure Quality Metrics
Hospital-level performance on the quality indicators was used in the primary analyses. For the HRR-level analyses, the weighted average of performance based on the total number of eligible HF patients at each hospital for the quality indicators (ACEI or ARB prescribed at discharge, LV function assessment, smoking-cessation counseling, and discharge instructions) was calculated. By using the HRR-level weighted average of hospital performance, hospitals in a market that had a higher volume of HF patients had their performance score weighted more highly than hospitals that had a lower volume of HF patients.
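The HRR-level weighting described above can be sketched with a small helper (hypothetical function name and numbers):

```python
def hrr_weighted_performance(hospitals):
    """Patient-volume-weighted average performance for one HRR.

    hospitals: list of (eligible_hf_patients, performance_pct) pairs,
    one per hospital in the HRR. Hospitals with more eligible HF
    patients contribute proportionally more to the HRR score.
    """
    total_patients = sum(n for n, _ in hospitals)
    return sum(n * pct for n, pct in hospitals) / total_patients

# Illustrative HRR: the 300-patient hospital dominates the average.
score = hrr_weighted_performance([(100, 90.0), (300, 70.0)])
print(score)  # 75.0
```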
We first examined descriptive statistics on the hospital-level performance and market intensity data. We then used a paired t test to assess the change in the mean hospital-level performance on the HF quality indicators from Quarter 1 of 2003 to Quarter 4 of 2006. We specified hospital-level ordinary least squares (OLS) fixed-effects regression models with Quintile 1 (most competitive markets) serving as our reference group to estimate the relationship between market competition and performance and to account for time-invariant omitted variables. We lagged market competition by one quarter in our models because we would reasonably expect a delayed response between the time when the HF performance data are made publicly available and the time when the effect of market competition occurs. Additionally, the temporal sequence of the lagged model makes it less likely for an external shock in the future to affect past performance.
A set of time indicator variables for the quarter of the study was included in the regression models, with the first quarter serving as the reference. We controlled for a number of hospital characteristics including bed size, teaching status, profit status, presence of a cardiac intensive-care unit, percentage Medicare, percentage Medicaid, and number of nurses per 1000 patient-days. Market-level sociodemographic variables of gender, race, age, per capita income, unemployment rate, number of physicians, and health maintenance organization penetration rate were included. We also included in our models the volume of HF patients and total admissions to control for the quality-volume relationship. We adjusted for the hospital market share of patients from the first quarter to control for the potential shifting of market shares over time. The regression models were specified with hospital-level fixed effects and clustering by HRR to take into account the correlation of the HRR-level independent variables. Robust standard errors were also specified in the multivariate analyses.
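The core of this specification, a one-quarter lag of competition combined with the within (fixed-effects) transformation, can be sketched on toy data with a single regressor. This is only a minimal sketch: the actual models include the full covariate set, time indicators, and HRR-clustered robust standard errors, and the data below are fabricated for illustration.

```python
import numpy as np

def fe_lagged_slope(hosp, comp, perf):
    """Within-hospital OLS slope of performance on one-quarter-lagged competition.

    hosp, comp, perf: equal-length arrays sorted by hospital, then quarter.
    Returns the fixed-effects (demeaned) OLS coefficient on lagged competition.
    """
    ys, xs, gs = [], [], []
    for g in np.unique(hosp):
        m = hosp == g
        c, p = comp[m], perf[m]
        xs.append(c[:-1])                    # competition lagged one quarter
        ys.append(p[1:])                     # performance in the next quarter
        gs.append(np.full(len(c) - 1, g))
    x, y, grp = map(np.concatenate, (xs, ys, gs))
    # Within transformation: subtract each hospital's mean (the fixed effect).
    for g in np.unique(grp):
        m = grp == g
        x[m] = x[m] - x[m].mean()
        y[m] = y[m] - y[m].mean()
    return float(x @ y / (x @ x))

# Toy panel: 2 hospitals x 4 quarters. Performance was constructed as
# a hospital effect plus 0.5 times last quarter's competition, so the
# within estimator recovers a slope of 0.5.
hosp = np.array([0, 0, 0, 0, 1, 1, 1, 1])
comp = np.array([2.0, 3.0, 4.0, 5.0, 1.0, 1.0, 2.0, 3.0])
perf = np.array([10.0, 11.0, 11.5, 12.0, 20.0, 20.5, 20.5, 21.0])
print(fe_lagged_slope(hosp, comp, perf))  # 0.5
```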
We also explored whether the rate of adherence to the HF quality indicators varied over time by market competition intensity, whereby competitive markets might conform more quickly to the HF quality indicators because of market forces and competitive pressure. Therefore, a time by lagged market-competition interaction variable was later included in the regression models.
The models were repeated as HRR-level fixed effects to account for the potential endogeneity problem of patient selection of hospitals based on perceived quality of care. In the HRR-level analyses, the market-weighted average of the covariates was included for each HRR.
RESULTS

Table 1 shows the change in average hospital performance for the HF quality indicators and the average number of eligible patients by quality indicator per hospital. The largest increase in average hospital performance over the study period was observed for the behavioral measures of smoking-cessation counseling (45%) and discharge instructions (38%), while less progress was made on the clinical measures of ACEI/ARB prescribed at discharge and LV function assessment.
The distribution of markets by the level of market competition intensity at the first and last quarters of the study shifted slightly over time, as markets became slightly less competitive.
A paired t test was conducted to examine the changes in average performance over time. The paired comparisons for each of the quality indicators were highly significant (P <.001), indicating a significant change in the mean hospital-level performance (Table 1).
Hospital-Level Multivariate Analyses
The results from the adjusted multivariate analyses indicate that the least competitive markets (Quintile 5) performed about 2.9% better than the most competitive markets (Quintile 1) for LV function assessment (P <.01). The second least competitive markets (Quintile 4) also performed about 1.9% better than the most competitive markets (Quintile 1) for LV function assessment (P = .05). We did not find any association between market competition and hospital-level performance for the other HF quality indicators. Including a time by lagged market-intensity interaction variable in the regression models did not yield any significant results (Table 2).
HRR-Level Multivariate Analyses
The OLS fixed effects regression models that estimated the relationship between market competition and performance were also repeated at the HRR level to assess whether the results might differ because of the potential issue of the small number of cases at any individual hospital and the endogeneity of patient selection. At the HRR level, we found that the least competitive markets (Quintile 5) performed about 5.1% worse for discharge instructions compared with the most competitive markets (Quintile 1) (P = .05).
DISCUSSION

In this study, we examined whether market competition was a potential driver of hospital performance on the HF quality indicators. From economic theory, it is assumed that hospitals operate in a competitive marketplace and are subject to market forces. Previous studies have suggested that market competition may be welfare enhancing and lead to better quality of care.9,10 At the hospital level, we found a small difference between the least competitive markets (Quintiles 4 and 5) and the most competitive markets (Quintile 1) for LV function assessment, where the least competitive markets performed slightly better. At the HRR level, however, the least competitive markets (Quintile 5) performed moderately worse than the most competitive markets on the discharge-instructions quality indicator. The slightly better performance for discharge instructions by the most competitive markets at the HRR level may be due to market forces. A possible reason that market competition did not have a stronger effect on the HF quality indicators overall could be that hospitals might be engaged in greater competition along “price” and other “non-price” dimensions, which may result in decreased attention to processes of care.
Hospital behavior and performance are also partly based on patient preferences.11 The public release of performance data was intended to inform consumers when they select a hospital. The lack of a stronger relationship between market competition and hospital performance on the evidence-based HF quality indicators may stem from the fact that patients may not necessarily be using The Joint Commission’s Quality Check reports to guide their healthcare decision-making. Consumers may falsely presume that quality of care is uniformly high across hospitals, or they may be unaware of the wide quality variation among hospitals.12 Despite the efforts to inform “consumer choice” and redress information asymmetries regarding quality of care, patients have not necessarily shifted their response toward using higher performing hospitals.13-15 Patients tend to select hospitals based on past experiences, physician affiliation, or anecdotal evidence.14
Furthermore, The Joint Commission’s ORYX initiative was designed to provide hospitals with feedback on their performance and to stimulate quality improvement efforts by publicly reporting performance data. In this study, we find that hospitals across all levels of market competition intensity significantly increased their rate of adherence to the HF quality indicators over time. Not surprisingly, we find that hospitals performed better on the clinical measures of LV function assessment and ACEI/ARB, which are more discernible by patients and are more closely associated with inpatient HF care. Hospitals may also have an underlying incentive to increase their conformance with the HF quality indicators, because they must demonstrate consistent steps to incrementally improve quality of care over time in order to maintain Joint Commission accreditation.
In all of the multivariate models, the time indicator variables were highly significant and explained a substantial proportion of the variance, which suggests that, on average, hospitals were engaged in substantial quality-improvement efforts over time either through an actual increase in performance or through better documentation.
This study has several limitations. First, hospitals that do not seek Joint Commission accreditation were not included; these hospitals are typically smaller with lower patient volume. Second, hospital performance data are self-reported, which may pose a potential source of bias. A previous study, however, found the reliability of the ORYX data to be high.16 Another study that examined the CMS Hospital Compare performance measures did not find any instances of top coding or exception reporting of HF patients.17 Further, the use of a single market-competition intensity measure (HHI) may not capture the heterogeneity across markets, such as hospital-system affiliation or patient mobility. It is also possible that other unobservable factors, such as payer mix, hospital financial condition, information technology infrastructure, and changes in nurse staffing, might explain additional variation in hospital performance. Hospital-based quality improvement initiatives, such as the American Heart Association’s Get With the Guidelines, may have increased provider adherence to the quality indicators over time.18-20 Lastly, we focused on only a single condition; while HF is one of the most common reasons for hospitalization, there are other important areas of inpatient care that could be examined.
In spite of these limitations, the fixed effects models and national sample used in our study permit stronger causal inferences and high external validity. Our study adds to the existing literature by further examining whether quality improvements made by hospitals based on public performance reports are occurring through the mechanism of market competition. Another contribution is that we used data on performance measures that included all patients 18 years and older who were treated for HF at a Joint Commission-accredited hospital and were eligible for the quality indicator.
In conclusion, we find that the level of market competition is associated with only modest differences in hospital performance. Market competition might be a blunt instrument and it may not be the most suitable policy tool to drive hospital quality-improvement efforts. If the policy goal is to stimulate hospital quality improvement on processes of care, public reporting programs alone might accomplish this goal, as hospitals substantially improved their performance on the evidence-based HF quality indicators over time. Alternative policy solutions should be considered to facilitate meaningful efforts to drive quality-improvement initiatives.
The authors would like to acknowledge Benn Greenspan, PhD, Donald Hedeker, PhD, Ross Mullner, PhD, Scott Williams, PsyD, Vivian Ho, PhD, and Jeffrey McCullough, PhD, for their thoughtful comments and feedback. The authors would also like to acknowledge The Joint Commission for providing the ORYX hospital performance data.
Author Affiliations: From Thomson Reuters, Healthcare Business Section, Analytic Consulting and Research Services (JLKM), Washington, DC; Division of Health Policy and Administration (ATL), University of Illinois at Chicago, Chicago, IL.
Funding Source: This study was sponsored by the Agency for Healthcare Research and Quality (AHRQ) R36 HS17944 grant and the WC and May Preble Deiss Award.
Author Disclosures: The authors (JLKM, ATL) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JLKM, ATL); acquisition of data (JLKM); analysis and interpretation of data (JLKM, ATL); drafting of the manuscript (JLKM, ATL); critical revision of the manuscript for important intellectual content (JLKM, ATL); statistical analysis (JLKM); obtaining funding (JLKM); and supervision (JLKM).
Address correspondence to: Jared Lane K. Maeda, PhD, MPH, 4301 Connecticut Ave NW, Ste 330, Washington, DC 20008. E-mail: firstname.lastname@example.org.
1. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
2. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002- 2004. N Engl J Med. 2005;353(3):255-264.
3. Joint Commission. Improving America’s Hospitals: A Report on Quality and Safety. Oakbrook Terrace, IL: Joint Commission; 2009.
4. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood). 2003;22(2):84-94.
5. Dartmouth Atlas of Health Care. Appendix on Geography of Health Care in the United States. 1999. http://www.dartmouthatlas.org/downloads/methods/geogappdx.pdf. Accessed February 23, 2008.
6. Zwanziger J, Melnick GA, Mann JM. Measures of hospital market structure: a review of the alternatives and a proposed approach. Socioecon Plann Sci. 1990;24(2):81-95.
7. Wong H, Zhan C, Mutter R. Do different measures of hospital competition matter in empirical investigations of hospital behavior? Rev Ind Organ. 2005;26(1):61-87.
8. US Department of Justice. Herfindahl-Hirschman Index. http://www.justice.gov/atr/public/testimony/hhi.htm. Accessed March 3, 2010.
9. Kessler DP, Geppert JJ. The effects of competition on variation in the quality and cost of medical care. J Econ Manage Strategy. 2005;14(3): 575-589.
10. Gaynor M, Moreno-Serra R, Propper C. Death by market power: reform, competition, and patient outcomes in the national health service. NBER Working Paper. 2010;16164.
11. Luft HS, Robinson JC, Garnick DW, Maerki SC, McPhee SJ. The role of specialized clinical services in competition among hospitals. Inquiry. 1986;23(1):83-94.
12. Hibbard JH. What can we say about the impact of public reporting? inconsistent execution yields variable results. Ann Intern Med. 2008; 148(2):160-161.
13. Hibbard JH, Stockard J, Tusler M. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150-1160.
14. Schneider EC, Lieberman T. Publicly disclosed information about the quality of health care: response of the US public. Qual Health Care. 2001;10(2):96-103.
15. Baker DW, Einstadter D, Thomas C, Husak S, Gordon NH, Cebul RD. The effect of publicly reporting hospital performance on market share and risk-adjusted mortality at high-mortality hospitals. Med Care. 2003;41(6):729-740.
16. Williams SC, Watt A, Schmaltz SP, Koss RG, Loeb JM. Assessing the reliability of standardized performance indicators. Int J Qual Health Care. 2006;18(3):246-255.
17. Ryan AM, Burgess JF Jr, Tompkins CP, Wallack SS. The relationship between Medicare’s process of care quality measures and mortality. Inquiry. 2009;46(3):274-290.
18. Heidenreich PA, Lewis WR, LaBresh KA, Schwamm LH, Fonarow GC. Hospital performance recognition with the Get With the Guidelines Program and mortality for acute myocardial infarction and heart failure. Am Heart J. 2009;158(4):546-553.
19. Peterson PN, Rumsfeld JS, Liang L, et al. Treatment and risk in heart failure: gaps in evidence or quality? Circ Cardiovasc Qual Outcomes. 2010; 3(3):309-315.
20. Ambardekar AV, Fonarow GC, Hernandez AF, Pan W, Yancy CW, Krantz MJ. Characteristics and in-hospital outcomes for nonadherent patients with heart failure: findings from Get With the Guidelines-Heart Failure (GWTG-HF). Am Heart J. 2009;158(4):644-652.