AJMC

Comparing Quality of Care in the Medicare Program

Published Online: November 19, 2010
Niall Brennan, MPP; and Mark Shepard, BA

Objective: To compare the clinical quality of care between Medicare fee-for-service (FFS) and Medicare Advantage (MA) programs.


Methods: We compared 11 Healthcare Effectiveness Data and Information Set (HEDIS) quality measures nationwide for MA managed care plans and the FFS program in 2006 and 2007. We adjusted FFS measures to match the geographic distribution of MA.


Results: Medicare Advantage plans scored substantially better (4-16 percentage points; median, 7.8 percentage points) on 8 measures, slightly better (1.5 percentage points) on 1 measure, and worse than FFS (2-5 percentage points; median, 4.1 percentage points) on 2 measures. The 8 measures on which MA scored substantially better were well established in the HEDIS measure set (introduced in the 1990s), whereas the other 3 were all newer (introduced in 2004-2005 data). Data and program differences complicated the comparison, but it is unlikely that they were large enough to explain the sizable MA-FFS gaps observed.


Conclusions: Quality measures showed large, though mixed, differences between MA and FFS. The dichotomy between older and newer measures in MA suggests a learning effect, with plans improving measurement and quality over time as measures become more familiar.

(Am J Manag Care. 2010;16(11):841-848)

Despite a growing focus on measuring and reporting quality of care in Medicare to allow beneficiaries to make informed choices of providers and plans, little published information compares quality of care in traditional fee-for-service (FFS) Medicare and Medicare’s private insurance option, Medicare Advantage (MA). By contrast, substantial resources exist for comparing quality among MA plans, which are presented prominently on Medicare’s Web site in the same area used by beneficiaries to select a plan.1 Due in part to this lack of data, debate among policy makers on the relative merits of MA and FFS has focused on payment rates rather than quality of care.2-4

Efforts to compare quality between MA and FFS have become a policy priority. After recommending MA-FFS comparisons for many years, the Medicare Payment Advisory Commission (MedPAC) issued detailed recommendations in March 2010 on methods for carrying out these comparisons, including an approach similar to the one we take.5 The Medicare Improvements for Patients and Providers Act of 2008 specifies that MA-FFS comparisons begin by March 2011, underlining the importance of pursuing currently feasible strategies.

We analyzed data on quality in FFS and MA programs during 2006-2007 using 11 measures of underuse of effective care from the Healthcare Effectiveness Data and Information Set (HEDIS). The HEDIS measures are reported annually for MA and commercial plans and form the basis of nationally recognized commercial plan rankings6 and quality ratings used to inform Medicare beneficiaries.1 By contrast, HEDIS measures have not previously been available for the FFS population but were calculated for 2006-2007 for a special project by the Centers for Medicare & Medicaid Services (CMS). These data allow for one of the first national comparisons of MA and FFS on evidence-based clinical quality measures. Previous work compared MA and FFS on patient satisfaction and quality using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey, but measures were based on beneficiary recollection of receipt of recommended care like flu shots.7,8 Our administrative HEDIS measures complement the CAHPS comparison and allow for comparisons of rarer conditions like depression. Past work has also compared MA and FFS quality at the state and regional level, generally finding higher quality care in MA managed care plans.9

Several issues complicated the comparison for certain measures, including variations in measure construction within the HEDIS framework, data limitations, and underlying program differences between MA and FFS. However, we argue that these data represent a valuable first step that shows how Medicare can better use existing resources to monitor FFS quality and inform beneficiaries who are choosing between MA and FFS. We also suggest ways in which future efforts could improve upon this comparison.

STUDY DATA AND METHODS

Fee-for-Service Sample

We analyzed 11 quality measures for Medicare FFS in 2006-2007 as calculated and published by CMS for the Generating Medicare Physician Quality Performance Measurement Results (GEM) project.10 This project was primarily intended to measure quality at the medical group practice level, but CMS also produced population-level measures. Our data were aggregated measures at the national, state, and zip code levels, covering all beneficiaries continuously enrolled in Medicare Parts A and B, and for some measures Part D, during the measurement years. Measures were constructed using CMS’s Parts A, B, and D claims databases.

The measures were constructed by CMS to conform to HEDIS specifications that require only administrative claims data to calculate. Data limitations necessitated a few minor modifications. One was a shorter look-back period for denominator exclusions because CMS analyzed data only for 2005-2007. Another was that for beneficiaries not enrolled in Part D, diabetes could be identified only from diagnoses in encounter data, not from use of diabetes medication.

HEDIS requires pharmacy claims data for 5 of the measures (Table 1), which are available only for the approximately 50% of FFS beneficiaries enrolled in stand-alone Part D plans. For these measures, the FFS data apply only to the population enrolled in Parts A, B, and D. Although this population differs from the population enrolled in Parts A and B, which is used for the other measures,11 the MA-FFS comparison is still of interest.

Beyond the MA-FFS comparison, these data present a snapshot of the national quality of care in FFS, updating results for other quality measures earlier in the decade.12,13

Medicare Advantage Sample

We compared these FFS data with concurrent HEDIS measures publicly reported by MA plans and audited by the National Committee for Quality Assurance (NCQA).14 Because private fee-for-service plans were exempt from quality reporting requirements at that time, we excluded them and limited our analysis to managed care plans, including HMOs, point-of-service plans, and preferred provider organizations (PPOs). In addition, we excluded MA plans centered outside the 50 states plus the District of Columbia. The final data included plans with total enrollment of approximately 6.0 million in 2006 and 6.5 million in 2007.

Quality Measures

All of the data are process measure rates defined according to HEDIS specifications. These were constructed by using claims data to identify the subset of enrollees (called the denominator or eligible population) for whom a treatment or screening was clinically recommended. The measure rate was the fraction of this denominator population who received the recommended care in accordance with the measure definition.15 We studied 11 of the 12 HEDIS measures analyzed by the GEM project (Table 1), excluding only colon cancer screening because of an insufficient look-back period. HEDIS specifications allow a colonoscopy to have been performed in the past 9 years and a flexible sigmoidoscopy or double contrast barium enema in the past 4 years, but the GEM project analyzed Medicare claims only for the 3-year period 2005-2007.

To analyze nationwide quality in each program, we summed numerators and denominators across plans (MA) or states (FFS), producing a national rate for each measure, following HEDIS 2008 Technical Specifications.15 (We used this formula to determine the measure denominator.) This method differs from NCQA’s practice of taking raw averages across plan scores (irrespective of plan size), but it produces a more accurate national picture of quality for the average beneficiary.
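The difference between the two aggregation methods can be sketched as follows. This is an illustrative example with hypothetical plan counts, not the study's actual data: pooling numerators and denominators weights each beneficiary equally, whereas a raw average of plan scores weights each plan equally regardless of size.

```python
# Hypothetical plans: each contributes a numerator (enrollees who received
# recommended care) and a denominator (eligible population).
plans = [
    {"numerator": 9000, "denominator": 10000},  # large plan, 90% rate
    {"numerator": 300,  "denominator": 500},    # small plan, 60% rate
]

# National rate used in the study: pool numerators and denominators,
# so every beneficiary counts equally.
national_rate = (sum(p["numerator"] for p in plans)
                 / sum(p["denominator"] for p in plans))

# NCQA-style raw average: each plan counts equally, irrespective of size.
raw_average = (sum(p["numerator"] / p["denominator"] for p in plans)
               / len(plans))

print(round(national_rate, 3))  # 0.886, dominated by the large plan
print(round(raw_average, 3))    # 0.75, small plan weighted equally
```

With skewed plan sizes the two summaries diverge, which is why the pooled rate better reflects quality for the average beneficiary.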

Administrative and Hybrid Measures

There is an important variation in the construction of 6 of the 11 measures (see Table 1) arising from the different ways FFS Medicare and MA plans operate. For these 6 measures, HEDIS allows (but does not require) plans to calculate measure rates on a random sample of the denominator population, using medical chart review to determine whether this sample received appropriate care—a procedure called the hybrid method. Because their claims data often are incomplete, HMOs and point-of-service plans typically use the hybrid method, which significantly boosts their quality scores above the administrative-only calculation.16 By contrast, PPOs (as well as FFS in the GEM study) typically lack the requisite medical chart data, so NCQA requires them to follow the administrative-only specification. (This requirement has been removed starting with HEDIS 2010.17)

Although this methodologic difference makes sense in the context of plans’ data and reimbursement practices, it could bias our FFS rates downward if FFS-reimbursed physicians fail to submit claims for all procedures or omit important diagnosis codes. (Upward bias also is possible if FFS-reimbursed physicians submit claims for procedures not actually performed.) To address this issue, we observed whether the 6 hybrid measures showed different trends than the 5 administrative-only measures, which are constructed identically in MA and FFS. We also compared rates for FFS and MA PPOs, neither of which uses the hybrid method.

Geographic Adjustment

Differences between national MA and FFS quality measures are partly due to MA-FFS differences within the same areas and partly due to their different distributions of beneficiaries across areas. Assuming geographic enrollment differences are primarily driven by factors unrelated to quality, it is important to control for geographic variation to isolate the “within-area” quality difference. We did this in 2 ways.

First, we weighted the state-level FFS rates to match the distribution of the MA measure’s denominator population across states. (The MA data are at the plan level, but almost all MA managed care plans are heavily concentrated [>95%] in 1 state, making it possible to allocate each plan’s denominator to a single state. For plans with enrollment in more than 1 state, we allocated each measure’s denominator across states using the enrollment distribution, which is available at the plan-state level.) The adjusted MA-FFS difference is equal to a weighted average of the 51 within-state quality differences. This approach controls for state-level differences but misses intrastate variation (eg, between urban and rural areas).
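The state-weighting step amounts to a weighted average of FFS rates, with weights given by the MA denominator distribution. The sketch below uses hypothetical rates and counts for two states, not the study's actual data:

```python
# Hypothetical state-level inputs.
ffs_rate = {"StateA": 0.80, "StateB": 0.60}        # FFS measure rate by state
ma_denominator = {"StateA": 7000, "StateB": 3000}  # MA eligible population

# Reweight FFS rates to match the MA denominator distribution across states,
# yielding FFS quality "as if" its beneficiaries were distributed like MA's.
total = sum(ma_denominator.values())
adjusted_ffs = sum(ffs_rate[s] * ma_denominator[s] / total for s in ffs_rate)

print(round(adjusted_ffs, 2))  # 0.74
```

Subtracting this adjusted FFS rate from the MA national rate then equals a weighted average of the within-state MA-FFS differences, which is the quantity of interest.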

