Do Health Systems Respond to the Quality of Their Competitors?

Daniel J. Crespin, PhD; Jon B. Christianson, PhD; Jeffrey S. McCullough, PhD; and Michael D. Finch, PhD
The authors determined whether Minnesota health systems responded to competitors’ publicly reported performance. Low performers fell further behind high performers, suggesting that reporting was not associated with quality competition.
Data and Measures

Our measure of performance is the Optimal Diabetes Care (ODC) score. The ODC score is the percentage of patients (aged 18-75 years) with diabetes who simultaneously achieve 5 treatment goals: (1) glycated hemoglobin (A1C) less than 7% (<8% starting with 2009 performance); (2) blood pressure less than 130/80 mm Hg (<140/80 mm Hg starting with 2010 performance); (3) low-density lipoprotein cholesterol less than 100 mg/dL; (4) daily aspirin use, unless contraindicated (includes only patients with ischemic vascular disease starting with 2009 performance); and (5) documented tobacco-free status.
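To illustrate how this "all-or-none" composite works, the sketch below computes an ODC score from patient-level goal indicators. The column names, and the simplified handling of aspirin contraindications, are our assumptions, not MNCM's actual specification:

```python
import pandas as pd

def odc_score(patients: pd.DataFrame) -> float:
    """Percentage of patients meeting all 5 treatment goals at once.

    One row per patient; column names are hypothetical. Thresholds
    follow the revised definitions (A1C <8%, BP <140/80 mm Hg), and
    aspirin contraindications are ignored for simplicity.
    """
    all_goals = (
        (patients["a1c"] < 8.0)
        & (patients["systolic_bp"] < 140)
        & (patients["diastolic_bp"] < 80)
        & (patients["ldl"] < 100)
        # Aspirin goal applies only to patients with ischemic vascular disease
        & (patients["daily_aspirin"] | ~patients["has_ivd"])
        & patients["tobacco_free"]
    )
    return 100 * all_goals.mean()
```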

We constructed indicators of relative performance and competitive environment. First, we calculated the mean ODC score of clinics in competing health systems. A health system's competitors consisted of all clinics in other systems within 5 miles (urban) or 25 miles (rural) of any of the system's clinics. To measure relative performance, we subtracted each clinic's competitor ODC measure (ie, mean ODC score of clinics in competing health systems) from its own ODC score. We created quintiles for this measure separately for urban and rural settings. Because we cannot measure relative performance for nonreporting clinics, we used the competitor ODC measure by itself in the reporting model. eAppendix C provides further explanation and descriptive statistics for these measures. We also created measures for the percentage of clinics in competing health systems that submitted reports and the number of competing systems to additionally control for competitive environment.
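To make the construction concrete, here is a minimal sketch of the competitor measure and relative-performance quintiles, assuming a table with one row per clinic for a given performance year and hypothetical column names; the study's actual geocoding and distance calculations may differ:

```python
import numpy as np
import pandas as pd

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between coordinate arrays."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * np.arcsin(np.sqrt(a))  # Earth radius ~3959 miles

def add_relative_performance(clinics: pd.DataFrame, radius: float) -> pd.DataFrame:
    """For each system, average the ODC scores of other systems' clinics
    within `radius` miles of any of its clinics, then express each
    clinic's performance relative to that competitor mean, in quintiles.
    Expects columns: system, lat, lon, odc (hypothetical layout).
    """
    out = clinics.copy()
    comp_mean = {}
    for sys_id, own in clinics.groupby("system"):
        others = clinics[clinics["system"] != sys_id]
        # A competitor clinic is within `radius` of *any* of the system's clinics
        dists = haversine_miles(
            own["lat"].values[:, None], own["lon"].values[:, None],
            others["lat"].values[None, :], others["lon"].values[None, :],
        )
        in_range = (dists <= radius).any(axis=0)
        comp_mean[sys_id] = others.loc[in_range, "odc"].mean()
    out["competitor_odc"] = out["system"].map(comp_mean)
    out["relative_perf"] = out["odc"] - out["competitor_odc"]
    out["rel_perf_quintile"] = pd.qcut(out["relative_perf"], 5, labels=False) + 1
    return out
```

Matching the paper's design, this would be called once on the urban subset with radius=5 and once on the rural subset with radius=25, so that quintiles are formed separately by setting.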

The MNCM data include the annual number of patients with diabetes by clinic, which we used to control for responses associated with patient demand in the performance model. Beginning in 2009, these data include the number of patients enrolled in Medicare, Minnesota Health Care Programs (MHCP) (ie, Medicaid and other programs for low-income families and individuals), and private insurance, which we used to examine market segmentation.

We constructed explanatory variables for the reporting model from available single-year data sources. These measures are excluded from the performance model, in which we employed clinic fixed effects. We used 2009 American Community Survey 5-year estimates to create indicators of each clinic’s potential patient population. These indicators include the mean age of the population and the proportion of the population on any type of public assistance within 5 miles and 25 miles of each urban and rural clinic, respectively. We used a 2012 licensure data set to determine the number of physicians, percentage of specialists, mean physician age, and percentage of female physicians at each clinic. We determined federally qualified health center (FQHC) status and affiliation with a critical access hospital (CAH) through Web searches.

Estimation

We estimated separate urban and rural models to account for location-driven differences in competition. In the first stage, we estimated reporting status using a probit regression. Our data suggest that many smaller clinics, and clinics perceived to have difficult-to-treat patient populations, were more likely to have delayed reporting until the mandate. Therefore, we interacted the 2009 performance year indicator with stand-alone clinic status, number of physicians, FQHC status, CAH affiliation, and the proportion of the potential patient population on public assistance. Because the mandate had a large influence on reporting decisions, we present average marginal effects on the probability of reporting over 3 periods: premandate (2007-2008), first mandate year (2009), and postmandate (2010-2013). In the second stage, we employed a fixed-effects model to estimate clinic performance that includes the inverse Mills ratio obtained from the first-stage estimation. Because the fixed-effects model required at least 2 observations per clinic, we excluded any clinic that reported in only 1 year, including all clinics that began reporting with 2013 performance, resulting in an estimation sample of 288 urban and 244 rural clinics. We present average marginal effects of each explanatory variable on clinic performance.
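A stylized sketch of this two-stage, Heckman-style selection correction using statsmodels follows; the variable names and specification are illustrative, not the authors' actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def two_stage(df: pd.DataFrame, stage1_vars: list, stage2_vars: list):
    """Stage 1: probit of reporting status on clinic/market traits.
    Stage 2: clinic fixed-effects OLS of the ODC score, with the
    inverse Mills ratio from stage 1 as a selection correction.
    Column names ('reported', 'clinic_id', 'odc') are hypothetical.
    """
    # Stage 1: probit; inverse Mills ratio = phi(xb) / Phi(xb)
    X1 = sm.add_constant(df[stage1_vars])
    probit = sm.Probit(df["reported"], X1).fit(disp=False)
    xb = np.dot(X1, probit.params)
    df = df.assign(inv_mills=norm.pdf(xb) / norm.cdf(xb))

    # Stage 2: demeaning (within transformation) implements clinic fixed
    # effects; keep reporters with at least 2 observations, as in the paper
    rep = df[df["reported"] == 1].groupby("clinic_id").filter(lambda g: len(g) >= 2)
    cols = stage2_vars + ["inv_mills"]
    demeaned = rep[cols + ["odc"]] - rep.groupby("clinic_id")[cols + ["odc"]].transform("mean")
    fe = sm.OLS(demeaned["odc"], demeaned[cols]).fit(
        cov_type="cluster", cov_kwds={"groups": rep["clinic_id"]}
    )
    return probit, fe
```

Average marginal effects from the first stage correspond to probit.get_margeff(); strictly, the second-stage standard errors would also need adjustment for the generated inverse-Mills regressor.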

RESULTS

Descriptive Statistics

Of 654 clinics providing diabetes care, 572 (87.5%) reported at least once between performance years 2006 and 2013. Urban clinics were more likely than rural clinics to be early reporters (Figure): For 2006, 32.1% of urban clinics and 14.8% of rural clinics reported their performance. The number of reporting clinics increased through performance year 2009, when reporting became mandatory, and then leveled off. By 2013, approximately 80% of clinics reported. Clinics that did not report were smaller, independent clinics that often had a higher percentage of specialists than reporting clinics (Table 1). Of clinics that never reported, more than half were stand-alone clinics.

Although a large improvement in publicly reported performance occurred between 2008 and 2010 (Figure), previous research using these data found that this increase is mainly attributable to changes in the definitions of measures for A1C, blood pressure, and daily aspirin, which made it easier for clinics to achieve the performance goal.9 Adjusting for these definition changes, performance improved modestly over the study period.9


 