The American Journal of Managed Care February 2015

How Pooling Fragmented Healthcare Encounter Data Affects Hospital Profiling

Amresh D. Hanchate, PhD; Arlene S. Ash, PhD; Ann Borzecki, MD, MPH; Hassen Abdulkerim, MS; Kelly L. Stolzmann, MS; Amy K. Rosen, PhD; Aaron S. Fink, MD; Mary Jo V. Pugh, PhD; Priti Shokeen, MS; and Michael Shwartz, PhD
Incomplete records of patient history can bias hospital profiling. Completing health records for Medicare-covered patients in VA hospitals resulted in modest changes in hospital performance.
Objectives
People receiving healthcare from multiple payers (eg, Medicare and the Veterans Health Administration [VA]) have fragmented health records. How the use of more complete data affects hospital profiling has not been examined.

Study Design
Retrospective cohort study.

Methods
We examined 30-day mortality following acute myocardial infarction at 104 VA hospitals for veterans 66 years and older, admitted from 2006 through 2010, who were also Medicare beneficiaries. Using VA-only data versus combined VA/Medicare data, we calculated 2 risk-standardized mortality rates (RSMRs): 1 based on observed mortality (O/E) and the other, from CMS’ Hospital Compare program, based on model-predicted mortality (P/E). We also categorized hospital outlier status based on RSMR relative to overall VA mortality: average, better than average, and worse than average. We tested whether hospitals whose patients received more of their care through Medicare would look relatively better when those data were included in risk adjustment than when VA data were used alone.

Results
Thirty-day mortality was 14.8%. Adding Medicare data caused both RSMR measures to increase significantly in about half the hospitals and decrease in the other half. O/E RSMR increased in 53 hospitals by an average of 2.2% and decreased in 51 hospitals by an average of 2.6%. P/E RSMR increased by an average of 1.2% in 56 hospitals and decreased by an average of 1.3% in the others. Outlier designation changed for 4 hospitals using the O/E measure, but for none using the P/E measure.

Conclusions
VA hospitals vary in their patients’ use of Medicare-covered care, and hence in the completeness of health records based on VA data alone. Combined VA/Medicare data yield modestly different hospital profiles than VA-only data.

Am J Manag Care. 2015;21(2):129-138
Take-Away Points
  • Evaluation of hospital performance in terms of risk-adjusted outcomes (eg, 30-day mortality rate) can be biased if patient health records are incomplete due to fragmentation of data across multiple healthcare systems.
  • For Medicare-covered veterans admitted to Veterans Health Administration (VA) hospitals for acute myocardial infarction, we found that using VA-only data substantially undercounts patient comorbidity compared with combined VA-Medicare data.
  • Using combined VA-Medicare data, instead of VA-only data, resulted in a modest change in relative hospital performance based on risk-adjusted 30-day mortality as the outcome.
Evaluating hospital performance based on patient outcomes, such as complications, readmissions, and mortality, has become a mainstay of ongoing healthcare quality improvement initiatives in the United States.1-3 Hospital performance measures have traditionally been used for monitoring outcomes of high-risk patients, and more recently, for public reporting and determining financial incentives.1,4,5 The CMS Hospital Compare program evaluates nearly all US acute care hospitals based on their patients’ risk of 30-day mortality and readmission following an admission for acute myocardial infarction (AMI), heart failure, or pneumonia.6

Ideally, such evaluations would rely on detailed clinical information on patient morbidity and severity at admission. However, due to issues with the completeness, access, and comparability of such data, many profiling programs, including Hospital Compare, use administrative discharge data (“billing” or “encounter” records).7-9 A potential shortcoming, whose consequences have not been previously explored, is that some individuals receive substantial care in multiple systems, making the data in any single system incomplete.10,11 Dual or triple eligibility for Medicare, Medicaid, and Veterans Health Administration (VA) healthcare; changes in Medicaid coverage; and switches into and among private managed care plans are common sources of “fragmentation” of patient data into payer-specific silos.12-15 The same incompleteness can arise when hospital profiling is based only on records of care received at that hospital. The 2 most common current strategies are: 1) to use the data at hand, ignoring its incompleteness; and 2) to exclude patients with dual coverage. Neither is ideal.16,17

When patients use different systems for distinct medical problems, single data source assessments can miss important differences in patient risk that could bias performance measures; in this case, pooled data should add consequential new clinical information.10,11 If hospitals vary substantially in how much of their patients’ data is unobserved, single-source hospital profiles could disadvantage facilities whose patients’ data are particularly incomplete.

To explore this, we examined hospital profiling in the VA, the largest integrated healthcare provider system in the United States, which offers comprehensive healthcare to 7.9 million enrollees (2010).18 The VA’s highly integrated, comprehensive, and systematized healthcare information system has been used extensively to evaluate patient outcomes and hospital performance.17,19 However, VA data can still be missing important diagnoses, since 77% of veterans have dual or multiple coverage, involving Medicare (51%), Medicaid (7%), TRICARE (16%), or commercial insurance (29%).12-14,18,20-22 How might evaluations of VA hospital performance based only on VA data differ from those based on more complete data? We compared VA only–based profiles with VA-Medicare–based profiles for veterans 66 years and older who were receiving Medicare’s fee-for-service (FFS) benefit; this cohort receives the vast majority of its healthcare within these 2 systems.18 We selected 30-day mortality for patients admitted for AMI to evaluate VA hospitals, as it remains a key performance criterion for stakeholder groups nationally.6,23-25 We examined the effects under 2 widely used evaluation methods, including CMS’ Hospital Compare, to facilitate comparability and enhance relevance.


METHODS

Study data for 2006 to 2010 were obtained from the VA and Medicare administrative inpatient and outpatient files. We used the VA patient treatment, outpatient clinical, and vital status files, and the Medicare FFS beneficiary, inpatient/skilled nursing facility, carrier, and outpatient files. Our goal was to compare VA hospitals on their AMI admissions (found in the VA inpatient file), based on their risk-adjusted 30-day survival. Because “risk” is calculated from diagnoses recorded on claims incurred during the 365 days prior to admission, we obtained Medicare FFS data during this period (to add to VA data) to create “complete” risk profiles. We applied the protocol adopted by CMS Hospital Compare and endorsed by the National Quality Forum to identify the study cohort and calculate risk.6

Study Cohort

Using VA acute inpatient discharge data for fiscal years 2006 to 2010, we identified all admissions, henceforth termed “index admissions,” for patients 66 years or older with a principal diagnosis of AMI (all International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] 410.xx codes, except 410.x2).6 We retained only hospitalizations for veterans who were both continuously enrolled in Medicare FFS and never enrolled in a VA hospice program during the 12 months preceding the admission date. We additionally excluded admissions that: a) were transfers from another acute care hospital; b) resulted in discharges against medical advice or discharges alive on the same or next day following admission; or c) had missing data on key measures. For patients with multiple otherwise eligible admissions in a year, we randomly selected 1 for our study sample to avoid survival bias.6 Following CMS Hospital Compare methodology, we excluded admissions from hospitals with fewer than 25 otherwise eligible admissions during the 5-year study period.
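As an illustration only, the admission filter described above can be sketched in a few lines of pandas. The column names (`patient_id`, `prin_dx`, `transfer_in`) are hypothetical stand-ins, not the actual VA file fields, and only a subset of the exclusions is shown:

```python
import pandas as pd

# Hypothetical discharge records; column names are illustrative stand-ins,
# not the actual VA patient treatment file layout.
adm = pd.DataFrame({
    "patient_id":  [1, 1, 2, 3],
    "admit_year":  [2007, 2007, 2008, 2009],
    "age":         [70, 70, 66, 80],
    "prin_dx":     ["410.71", "410.41", "410.72", "410.42"],  # ICD-9-CM principal diagnosis
    "transfer_in": [False, False, False, True],
})

def eligible_index_admissions(df: pd.DataFrame) -> pd.DataFrame:
    """Apply a subset of the Hospital Compare AMI cohort rules."""
    keep = (
        (df["age"] >= 66)
        & df["prin_dx"].str.startswith("410")
        & ~df["prin_dx"].str.endswith("2")   # drop 410.x2 codes
        & ~df["transfer_in"]                 # drop transfers from another acute hospital
    )
    # Randomly keep 1 otherwise-eligible admission per patient per year
    return df[keep].groupby(["patient_id", "admit_year"]).sample(n=1, random_state=0)

cohort = eligible_index_admissions(adm)  # 1 row: one of patient 1's two 2007 admissions
```

The random selection of a single admission per patient-year mirrors the survival-bias safeguard described above.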

Patient Outcome and Risk Factors

We examined patient death within 30 days of the index admission date. Risk factors consisted of demographics and comorbid conditions identified using ICD-9-CM diagnosis codes in the index admission record and in records of inpatient discharges and outpatient visits during the 12 months preceding the index admission date.6 Following CMS Hospital Compare protocols, we excluded selected secondary diagnosis codes in the index record identified as potential complications of the admission itself.6 The diagnosis codes were then classified using DxCG condition categories.26,27 To examine the impact of combining patient data from Medicare records, we produced 2 sets of risk factors: 1 based on VA data alone and the other based on combined VA and Medicare data.
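To make the two risk-factor inputs concrete, here is a minimal sketch of pooling and deduplicating lookback diagnoses. The patient, the ICD-9-CM codes, and the frame layouts are invented for illustration, and the DxCG classification step is omitted:

```python
import pandas as pd

# Hypothetical 12-month lookback diagnoses for one patient; layouts are illustrative.
va_dx       = pd.DataFrame({"patient_id": [1, 1], "dx": ["250.00", "401.9"]})  # diabetes, hypertension
medicare_dx = pd.DataFrame({"patient_id": [1],    "dx": ["428.0"]})            # heart failure, unseen in VA data

def risk_factor_sets(va: pd.DataFrame, medicare: pd.DataFrame):
    """Build the 2 inputs compared in the study: VA-only diagnoses,
    and pooled VA + Medicare diagnoses, deduplicated per patient."""
    va_only  = va.drop_duplicates()
    combined = pd.concat([va, medicare], ignore_index=True).drop_duplicates()
    return va_only, combined

va_only, combined = risk_factor_sets(va_dx, medicare_dx)  # 2 vs 3 distinct diagnoses
```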

Medicare Utilization Measures

To quantify patient- and hospital-level differences in Medicare utilization, we identified all acute inpatient care and outpatient visits in the 12 months preceding the index admission, separately in VA and Medicare data. For each index admission, we defined 2 measures: 1) a categorical grouping of the relative volume of Medicare use: a) no Medicare-covered inpatient or outpatient care (“none”), b) at least some, but less than 25%, of outpatient visits covered by Medicare (“moderate”), and c) at least 25% of outpatient visits covered by Medicare (“high”); and 2) the proportion (%) of a patient’s outpatient visits that had been covered by Medicare. Using the latter measure, we grouped hospitals into tertiles of increasing proportion of Medicare-covered outpatient use.
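The two utilization measures can be written out directly. This is a sketch under the definitions above, with tertile assignment via empirical quantiles; the function and variable names are my own:

```python
import numpy as np

def medicare_use_category(medicare_outpt: int, total_outpt: int, medicare_inpt: int = 0) -> str:
    """Categorical grouping of relative Medicare use, per the definitions above."""
    if medicare_outpt == 0 and medicare_inpt == 0:
        return "none"                      # no Medicare-covered inpatient or outpatient care
    share = medicare_outpt / total_outpt if total_outpt else 0.0
    return "high" if share >= 0.25 else "moderate"

def hospital_tertiles(mean_medicare_share: np.ndarray) -> np.ndarray:
    """Assign hospitals to tertiles (1 = lowest) of mean Medicare-covered outpatient share."""
    cuts = np.quantile(mean_medicare_share, [1 / 3, 2 / 3])
    return np.searchsorted(cuts, mean_medicare_share, side="right") + 1

# Example: 2 of 10 outpatient visits covered by Medicare -> "moderate"
```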

Risk Adjustment Models

To obtain the weight associated with each risk factor and the adjusted discharge-level predicted probability of 30-day mortality, we estimated: a) a logistic regression model (GLM); and b) a hierarchical logistic regression model (HLM), wherein the log-odds of the dichotomous outcome of 30-day mortality was specified as a linear function of patient risk factors; the HLM also included an unobserved hospital effect.4,28,29 We reported odds ratios (ORs) associated with each risk factor. For both models, overall model fit was evaluated by the area under the receiver operating characteristic curve (C statistic), the percentage of outcome variation explained, and the observed outcome rate in the lowest and highest deciles of predicted probability of death.30 We calculated 2 predicted numbers of deaths for each hospital: the first, denoted E, is the expected number of deaths at that hospital, assuming average VA care but adjusting for its patients’ risk (from the GLM and HLM models); the second, denoted P, is the expected number of deaths in that hospital accounting for both patient risk and the hospital effect (from the HLM model).30
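Given discharge-level predicted probabilities from such models, E and P are simply within-hospital sums. A toy numpy sketch with made-up fitted values (the model fitting itself is not shown):

```python
import numpy as np

# Hypothetical fitted 30-day mortality probabilities for 6 discharges at 2 hospitals.
hospital    = np.array(["A", "A", "A", "B", "B", "B"])
p_risk_only = np.array([0.10, 0.20, 0.15, 0.10, 0.20, 0.15])  # risk-adjusted, average VA care
p_with_hosp = np.array([0.12, 0.23, 0.18, 0.08, 0.17, 0.12])  # HLM, including the hospital effect

def sum_by_hospital(probs: np.ndarray, hosp: np.ndarray) -> dict:
    """Sum discharge-level predicted probabilities within each hospital."""
    return {h: float(probs[hosp == h].sum()) for h in np.unique(hosp)}

E = sum_by_hospital(p_risk_only, hospital)  # expected deaths assuming average VA care
P = sum_by_hospital(p_with_hosp, hospital)  # predicted deaths including the hospital effect
```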

As a potential measure of the incremental patient risk captured in Medicare data, we estimated a separate hierarchical logistic regression model in which we added a patient discharge-level categorical indicator of Medicare use as a covariate.

Risk-Standardized Mortality Rates (RSMRs)

Hospital performance was expressed in terms of risk standardized mortality rates (RSMRs).28,30 Two measures of RSMR are commonly used for hospital profiling: a) the traditional measure, based on the ratio of observed number of hospital deaths (O) to E28,30; and b) the Hospital Compare measure, based on the ratio of P to E.28,29 Each ratio is multiplied by the overall observed death rate across all hospitals to obtain O/E RSMR and P/E RSMR estimates, respectively. Since the 2 RSMR measures behave differently in profiling, primarily because the P/E ratio “shrinks” estimates toward an overall average, we separately examined the impact of adding Medicare data when profiling hospitals using each measure.28
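In code, the two RSMRs are one-line ratios. Using the cohort’s observed 14.8% mortality as the overall rate, a hospital with 12 observed deaths against 10 expected would have an O/E RSMR of about 17.8%:

```python
def rsmr_oe(observed: float, expected: float, overall_rate: float) -> float:
    """Traditional RSMR: (O / E) x overall observed mortality rate."""
    return observed / expected * overall_rate

def rsmr_pe(predicted: float, expected: float, overall_rate: float) -> float:
    """Hospital Compare RSMR: (P / E) x overall rate; P is shrunk toward the average."""
    return predicted / expected * overall_rate

oe = rsmr_oe(12, 10, 0.148)    # about 0.1776, i.e., ~17.8%
pe = rsmr_pe(10.5, 10, 0.148)  # about 0.1554, i.e., ~15.5%
```

Because P comes from the hierarchical model, small hospitals’ P/E ratios sit closer to 1 than their O/E ratios, which is the shrinkage behavior noted above.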

Analysis of Impact of Pooling Medicare Data

The direct impact of adding Medicare data comes from finding risk factors not documented in VA records. Accordingly, we first calculated the prevalence of individual risk factors in VA data alone and in combined VA/Medicare data. Changes in risk factor prevalence require recalibrating the mortality model, leading to new risk weights. We refer to the changes in E associated with changing risk weights as an indirect effect. We estimated the direct, indirect, and overall changes in RSMR from adding Medicare data. The direct effect was measured by the change in RSMRs due to the change in risk prevalence, holding risk weights unchanged, while the indirect effect was measured by the change due to the change in risk weights, keeping risk prevalence unchanged.
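The decomposition can be illustrated with a toy logistic model: hold the weights fixed while swapping in the pooled-data design matrix (direct effect), or hold the design matrix fixed while swapping in the recalibrated weights (indirect effect). This sketch works on the expected-death count E rather than the full RSMR, and all matrices and weights are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_deaths(X: np.ndarray, w: np.ndarray) -> float:
    """Sum of model-predicted death probabilities for one hospital's discharges."""
    return float(sigmoid(X @ w).sum())

# Hypothetical design matrices (rows = discharges; cols = intercept + comorbidity flags).
X_va   = np.array([[1, 0, 0], [1, 1, 0]], dtype=float)  # risk factors seen in VA data alone
X_both = np.array([[1, 0, 1], [1, 1, 1]], dtype=float)  # extra comorbidities found in Medicare data
w_va   = np.array([-2.0, 0.50, 0.40])                   # illustrative VA-only weights
w_both = np.array([-2.1, 0.45, 0.50])                   # illustrative recalibrated weights

base     = expected_deaths(X_va, w_va)
direct   = expected_deaths(X_both, w_va) - base   # prevalence changes, weights held fixed
indirect = expected_deaths(X_va, w_both) - base   # weights change, prevalence held fixed
total    = expected_deaths(X_both, w_both) - base # overall change in E
# Because the model is nonlinear, direct + indirect only approximates total.
```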

Importantly, when a risk prediction model is calibrated to data, the sum of the expected probabilities equals the observed mortality; therefore, the sums of the expected probabilities of mortality from models with and without Medicare data are equal. Thus, if adding Medicare data causes the expected probabilities in some hospitals to increase, they must decrease in others. Given this, we reported overall change in RSMRs separately for hospitals that experienced an increase and those that experienced a decrease.

Our core measures of the impact of adding Medicare data are the absolute and relative (%) overall RSMR change. We calculated 95% CIs for these, defined as the (2.5th, 97.5th) percentile range of the RSMR change, from 1000 bootstrap resamplings, each stratified by hospital.31,32 A second outcome of interest is change in outlier status, defined by the RSMR bootstrap CI lying either entirely above (“worse than average”) or entirely below (“better than average”) the VA national average mortality rate. We examined changes in outlier status after adding Medicare data. Further, to evaluate associations with the extent of Medicare utilization, we estimated RSMR and outlier-status changes for hospitals within tertiles of Medicare utilization.
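A simplified sketch of the percentile-bootstrap CI and the outlier classification follows. It resamples discharges within a single hospital; the study’s hospital-stratified scheme over 1000 replicates is more involved, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_rsmr_ci(deaths, expected, hosp_ids, target_hosp, overall_rate, n_boot=1000):
    """95% percentile CI for one hospital's O/E RSMR, resampling its
    discharges with replacement (a simplified stratified bootstrap)."""
    mask = hosp_ids == target_hosp
    d, e = deaths[mask], expected[mask]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, d.size, d.size)  # resample discharges with replacement
        stats.append(d[idx].sum() / e[idx].sum() * overall_rate)
    return np.percentile(stats, [2.5, 97.5])

def outlier_status(ci_low: float, ci_high: float, national_rate: float) -> str:
    """Classify a hospital by where its RSMR CI sits relative to the national rate."""
    if ci_low > national_rate:
        return "worse than average"
    if ci_high < national_rate:
        return "better than average"
    return "average"
```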
