The American Journal of Managed Care, November 2010

Comparing Quality of Care in the Medicare Program

Niall Brennan, MPP; and Mark Shepard, BA

Quality measures showed large, though mixed, differences between Medicare fee-for-service and Medicare Advantage programs.

Objective: To compare the clinical quality of care between Medicare fee-for-service (FFS) and Medicare Advantage (MA) programs.


Methods: We compared 11 Healthcare Effectiveness Data and Information Set (HEDIS) quality measures nationwide for MA managed care plans and the FFS program in 2006 and 2007. We adjusted FFS measures to match the geographic distribution of MA.


Results: Medicare Advantage plans scored substantially better (4-16 percentage points; median, 7.8 percentage points) on 8 measures, slightly better (1.5 percentage points) on 1 measure, and worse than FFS (2-5 percentage points; median, 4.1 percentage points) on 2 measures. The 8 measures on which MA scored substantially better were well established in the HEDIS measure set (introduced in the 1990s), whereas the other 3 were all newer (introduced in 2004-2005 data). Data and program differences complicated the comparison, but it is unlikely that they were large enough to explain the sizable MA-FFS gaps observed.


Conclusions: Quality measures showed large, though mixed, differences between MA and FFS. The dichotomy between older and newer measures in MA suggests a learning effect, with plans improving measurement and quality over time as measures become more familiar.

(Am J Manag Care. 2010;16(11):841-848)

This study compared quality in traditional fee-for-service (FFS) Medicare and Medicare Advantage (MA) programs for 2006-2007.

  • Relative performance on 11 clinical quality measures showed notable differences between FFS and MA, with neither program performing better on all measures.


  • MA-FFS quality comparisons should be used to inform policy makers who set program provisions and beneficiaries choosing between traditional FFS Medicare and an MA plan.
Despite a growing focus on measuring and reporting quality of care in Medicare to allow beneficiaries to make informed choices of providers and plans, little published information compares quality of care in traditional fee-for-service (FFS) Medicare and Medicare’s private insurance option, Medicare Advantage (MA). By contrast, substantial resources exist for comparing quality among MA plans, which are presented prominently on Medicare’s Web site in the same area used by beneficiaries to select a plan.1 Due in part to this lack of data, debate among policy makers on the relative merits of MA and FFS has focused on payment rates rather than quality of care.2-4

Efforts to compare quality between MA and FFS have become a policy priority. After recommending MA-FFS comparisons for many years, the Medicare Payment Advisory Commission (MedPAC) issued detailed recommendations in March 2010 on methods for carrying out these comparisons, including an approach similar to the one we take.5 The Medicare Improvements for Patients and Providers Act of 2008 specifies that MA-FFS comparisons begin by March 2011, underlining the importance of pursuing currently feasible strategies.

We analyzed data on quality in FFS and MA programs during 2006-2007 using 11 measures of underuse of effective care from the Healthcare Effectiveness Data and Information Set (HEDIS). The HEDIS measures are reported annually for MA and commercial plans and form the basis of nationally recognized commercial plan rankings6 and quality ratings used to inform Medicare beneficiaries.1 By contrast, HEDIS measures have not previously been available for the FFS population but were calculated for 2006-2007 for a special project by the Centers for Medicare & Medicaid Services (CMS). These data allow for one of the first national comparisons of MA and FFS on evidence-based clinical quality measures. Previous work compared MA and FFS on patient satisfaction and quality using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey, but measures were based on beneficiary recollection of receipt of recommended care like flu shots.7,8 Our administrative HEDIS measures complement the CAHPS comparison and allow for comparisons of rarer conditions like depression. Past work has also compared MA and FFS quality at the state and regional level, generally finding higher quality care in MA managed care plans.9

Several issues complicated the comparison for certain measures, including variations in measure construction within the HEDIS framework, data limitations, and underlying program differences between MA and FFS. However, we argue that these data represent a valuable first step that shows how Medicare can better use existing resources to monitor FFS quality and inform beneficiaries who are choosing between MA and FFS. We also suggest ways in which future efforts could improve upon this comparison.


Fee-for-Service Sample

We analyzed 11 quality measures for Medicare FFS in 2006-2007 as calculated and published by CMS for the Generating Medicare Physician Quality Performance Measurement Results (GEM) project.10 This project was primarily intended to measure quality at the medical group practice level, but CMS also produced population-level measures. Our data were aggregated measures at the national, state, and zip code levels, covering all beneficiaries continuously enrolled in Medicare Parts A and B, and for some measures Part D, during the measurement years. Measures were constructed using CMS’s Parts A, B, and D claims databases.

The measures were constructed by CMS to conform to HEDIS specifications that require only administrative claims data to calculate. Data limitations necessitated a few minor modifications. One was a shorter look-back period for denominator exclusions because CMS analyzed data only for 2005-2007. Another was that for beneficiaries not enrolled in Part D, diabetes could be identified only from diagnoses in encounter data, not from use of diabetes medication.

HEDIS requires pharmacy claims data for 5 of the measures (Table 1), which are available only for the approximately 50% of FFS beneficiaries enrolled in stand-alone Part D plans. For these measures, the FFS data apply only to the population enrolled in Parts A, B, and D. Although this population differs from the population enrolled in Parts A and B, which is used for the other measures,11 the MA-FFS comparison is still of interest.

Beyond the MA-FFS comparison, these data present a snapshot of the national quality of care in FFS, updating results for other quality measures earlier in the decade.12,13

Medicare Advantage Sample

We compared these FFS data with concurrent HEDIS measures publicly reported by MA plans and audited by the National Committee for Quality Assurance (NCQA).14 Because private FFS plans were exempt from quality reporting requirements at that time, we excluded them and limited our analysis to managed care plans including HMOs, point-of-service plans, and preferred provider organizations (PPOs). In addition, we excluded MA plans centered outside of the 50 states plus the District of Columbia. The final data included plans with total enrollment of approximately 6.0 million in 2006 and 6.5 million in 2007.

Quality Measures

All of the data are process measure rates defined according to HEDIS specifications. These were constructed by using claims data to identify the subset of enrollees (called the denominator or eligible population) for whom a treatment or screening was clinically recommended. The measure rate was the fraction of this denominator population who received the recommended care in accordance with the measure definition.15 We studied 11 of the 12 HEDIS measures analyzed by the GEM project (Table 1), excluding only colon cancer screening because of an insufficient look-back period. HEDIS specifications allow a colonoscopy to have been performed in the past 9 years and a flexible sigmoidoscopy or double contrast barium enema to have been performed in the past 4 years. But the GEM study only analyzed Medicare claims over a 3-year period from 2005-2007.

To analyze nationwide quality in each program, we summed numerators and denominators across plans (MA) or states (FFS), producing a national rate for each measure, following HEDIS 2008 Technical Specifications.15 (We used these specifications to determine each measure's denominator.) This method differs from NCQA's practice of taking raw averages across plan scores (irrespective of plan size), but it produces a more accurate national picture of quality for the average beneficiary.
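The distinction between the pooled rate used here and NCQA's unweighted average matters whenever plan sizes differ. A minimal sketch in Python illustrates the two aggregation methods; the plan names and numerator/denominator counts are hypothetical, not taken from the study:

```python
# Each plan contributes a (numerator, denominator) pair for one measure.
# All figures below are hypothetical illustrations.
plans = {
    "Plan A": (9_000, 10_000),  # large plan, 90% rate
    "Plan B": (300, 1_000),     # small plan, 30% rate
}

def pooled_rate(plans):
    """Sum numerators and denominators across plans, so every
    beneficiary counts equally (the approach used in this study)."""
    num = sum(n for n, _ in plans.values())
    den = sum(d for _, d in plans.values())
    return num / den

def unweighted_mean(plans):
    """Average the plan-level rates, so every plan counts equally
    regardless of size (NCQA's reporting convention)."""
    return sum(n / d for n, d in plans.values()) / len(plans)

print(round(pooled_rate(plans), 3))      # 0.845
print(round(unweighted_mean(plans), 3))  # 0.6
```

Because the large plan dominates the pooled denominator, the pooled rate (84.5%) sits far above the unweighted mean (60%), reflecting the experience of the average beneficiary rather than the average plan.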

Administrative and Hybrid Measures

There is an important variation in the construction of 6 of the 11 measures (see Table 1) arising from the different ways FFS Medicare and MA plans operate. For these 6 measures, HEDIS allows (but does not require) plans to calculate measure rates on a random sample of the denominator population, using medical chart review to determine whether this sample received appropriate care—a procedure called the hybrid method. Because their claims data often are incomplete, HMOs and point-of-service plans typically use the hybrid method, which significantly boosts their quality scores above the administrative-only calculation.16 By contrast, PPOs (as well as FFS in the GEM study) typically lack the requisite medical chart data, so NCQA requires them to follow the administrative-only specification. (This requirement has been removed starting with HEDIS 2010.17)

Although this methodologic difference makes sense in the context of plans’ data and reimbursement practices, it could bias our FFS rates downward if FFS-reimbursed physicians fail to submit claims for all procedures or omit important diagnosis codes. (Upward bias also is possible if FFS-reimbursed physicians submit claims for procedures not actually performed.) To address this issue, we observed whether the 6 hybrid measures showed different trends than the 5 administrative-only measures, which are constructed identically in MA and FFS. We also compared rates for FFS and MA PPOs, neither of which uses the hybrid method.

Geographic Adjustment

Differences between national MA and FFS quality measures are partly due to MA-FFS differences within the same areas and partly due to their different distributions of beneficiaries across areas. Assuming geographic enrollment differences are primarily driven by factors unrelated to quality, it is important to control for geographic variation to isolate the “within-area” quality difference. We did this in 2 ways.

First, we weighted the state-level FFS rates to match the distribution of the MA measure’s denominator population across states. (The MA data are at the plan level, but almost all MA managed care plans are heavily concentrated [>95%] in 1 state, making it possible to allocate each plan’s denominator to a single state. For plans with enrollment in more than 1 state, we allocated each measure’s denominator across states using the enrollment distribution, which is available at the plan-state level.) The adjusted MA-FFS difference is equal to a weighted average of the 51 within-state quality differences. This approach controls for state-level differences but misses intrastate variation (eg, between urban and rural areas).
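The state-level adjustment just described amounts to a weighted average of FFS rates, with each state weighted by its share of the MA denominator population. A hypothetical two-state sketch (rates and populations are illustrative, not study values):

```python
# State-level geographic adjustment: reweight FFS rates to match the
# MA denominator distribution across states. All figures hypothetical.
ffs_rate = {"CA": 0.80, "TX": 0.60}            # FFS measure rate by state
ma_denominator = {"CA": 30_000, "TX": 10_000}  # MA eligible population by state

def adjusted_ffs_rate(ffs_rate, ma_denominator):
    """Weighted average of state FFS rates, using each state's share
    of the MA denominator as its weight."""
    total = sum(ma_denominator.values())
    return sum(ffs_rate[s] * ma_denominator[s] / total for s in ffs_rate)

print(round(adjusted_ffs_rate(ffs_rate, ma_denominator), 3))  # 0.75
```

Here the adjusted FFS rate (75%) is pulled toward California's rate because MA enrollment is concentrated there, so the subsequent MA-FFS comparison reflects within-state differences rather than where each program's beneficiaries happen to live.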

Second, we preliminarily controlled for substate geography by weighting the FFS measure denominator populations to match the county-level distribution of MA enrollees. Although adjusting at a smaller geographic level is preferable, this adjustment has 2 limitations. First, the distribution of MA enrollment may differ from the distribution in each measure’s denominator population (although the 2 distributions should be correlated), but the latter was not available at the county level. Second, county-level adjustment was not feasible for 4 measures, for which most zip code–level FFS rates have been suppressed because they were based on fewer than 11 beneficiaries. Because of these limitations, we report both the state-level and county-level geographic adjustments.

Sociodemographic Differences

Traditionally—for instance, in NCQA publications18 and in MA quality ratings presented to Medicare beneficiaries1— HEDIS process measures have not been case mix adjusted because they apply to a clinically similar denominator population. However, the different characteristics of MA and FFS enrollees may raise concerns. Because we did not have quality measures stratified by demographics, it was impossible to adjust for case mix. Instead, we used enrollment-level differences as a proxy to assess the potential magnitude of demographic differences.


Sample Characteristics

Table 2 shows the demographics of enrollees in FFS and in MA plans included in our sample. Fee-for-service enrolls slightly more males and significantly more disabled beneficiaries under age 65 years, as well as more people dually eligible for Medicare and Medicaid. (We defined dual eligibility broadly to include any beneficiary whose Part B premium is paid by a state Medicaid program.) Medicare Advantage enrollees are more concentrated in metropolitan areas and in the Pacific and Middle Atlantic census divisions.
