This survey of 252 HMOs found that almost all measure their performance on multiple indicators of quality and most use these data in quality improvement activities.
Objective: To examine the current state of quality monitoring and management activities of US health plans.
Study Design: Cross-sectional survey.
Methods: We surveyed medical directors of 252 commercial HMOs (96% response rate) drawn from 41 nationally representative markets in the United States. We randomly sampled healthcare markets with at least 100,000 HMO enrollees. The markets in our sampling frame include an estimated 91% of US HMO enrollees and represent 78% of the metropolitan population.
Results: There was near-universal collection of data at the health plan level for each of the 7 outpatient measures we examined, ranging from 92.1% of health plans that collect data on hypertension control and cholesterol management to 99.2% that collect data on patient satisfaction. There also was substantial data collection at the level of the individual provider or physician group (ranging from 50.4% for hypertension control to 81.4% for diabetes care); this was more common in health plans that primarily use capitation to reimburse primary care physicians. Health plans that collected data typically fed these data back to physician groups, but public reporting to enrollees was infrequent.
Conclusions: Almost all health plans measured their performance on multiple indicators of quality. The majority of health plans also collected data at the level of the individual physician or group and used these data in quality improvement activities, but not in public reporting. Thus, adoption of physician-level performance measurement and reporting by the Centers for Medicare & Medicaid Services will likely entail a major change for individual physicians.
(Am J Manag Care. 2008;14(6):377-386)
This survey of a nationally representative sample of HMOs indicated that almost all health plans measure their performance on multiple indicators of quality.
The majority of health plans collect data at the level of the individual physician or group and use these data in quality improvement activities, but not in public reporting.
Collection and use of performance data are more prevalent among health plans that predominantly use capitation.
These findings document the substantial changes health plans have made in data collection and quality management, and foreshadow the controversy ahead as we likely move to broad-scale public reporting on individual physicians.
The spread of capitation within the managed care sector during the early 1990s intensified concerns about quality of care as a major potential problem for the US healthcare system.1-3 Until that point, the vast majority of US physicians had been compensated using fee-for-service (FFS) payments, which, if anything, were believed to lead to overprovision of services.4 Under capitation, however, physicians and other providers of care had financial incentives to provide fewer services, leading to fears that physicians would skimp on care.5 These concerns contributed to the development and promulgation of the multiple quality measures and measurement activities that are common today.
Over the past 15 years, quality monitoring has become commonplace, and there has been increased interest in harnessing the organizational capabilities of health plans and hospitals to improve the quality of care.6-8 Most organizational efforts have included systematic collection and examination of clinical data, along with a variety of strategies such as physician and/or patient education, physician- and patient-specific feedback, public release of data, and financial incentives designed to improve adherence to accepted standards of care. Managed care plans occupy a unique place within the healthcare system, affording them the opportunity to access a variety of clinical and financial data that can be used in quality improvement activities.
Despite the importance of organizational efforts to monitor and improve quality of care, few data exist that describe the actual quality management practices of health plans.9-13 The data that do exist are a decade old. In the interim, there have been tremendous advances in information systems and other electronic capabilities that can enhance the capacity of health plans to engage in medical management activities. In this study we examined the current state of quality management activities of a nationally representative sample of commercial HMOs, particularly those activities related to the collection and use of clinical data. We hypothesized that health plans that primarily use capitation rather than FFS to compensate primary care physicians (PCPs) would have more highly developed quality management programs.
We conducted a survey of a national sample of HMOs about their quality management activities (hereafter, we interchangeably use the terms “health plan” and HMO). We chose to focus on HMOs because among all types of health plans, they have the most advanced quality management infrastructure. There are currently more than 70 million enrollees in HMOs nationally.14 We estimated the prevalence of specific quality management activities using a set of representative quality indicators related to chronic disease management and prevention selected from the Health Plan Employer Data and Information Set (HEDIS). We previously reported on a subset of these data that focus on pay for performance (P4P).15
Survey Sample and Data Collection
The survey instrument elicited information related to the organizational characteristics of the health plan, its products and purchaser contracts, provider contracting, quality improvement activities, and “member-centric” health improvement efforts. Unless otherwise noted below, the frame of reference for the survey was 2005. Cognitive testing of a draft instrument was conducted with 5 health plans that were not in our sample prior to finalizing the questionnaire.
We first elicited information about the numbers of enrollees in the sampled market, the use of primary care gatekeeping, accreditation by the National Committee for Quality Assurance (NCQA), and ownership (for profit, not for profit). Next, we asked about the degree to which the plan relied on salary, capitation, and FFS payment for compensating PCP services. We were not able to distinguish between primary care payment arrangements with organizational entities such as large medical groups versus those with individual physicians, so individual physicians belonging to large medical groups might not be paid directly by capitation payments themselves in instances when the health plan used capitation payments for PCP services.
We then asked a series of questions about data collection programs at the level of the health plan as a whole, as well as at the level of physicians or groups of physicians within the plan. For these questions, we selected 7 tracer measures from HEDIS in the areas of patient satisfaction (1 measure), prevention (1 measure), mental health (1 measure), and other chronic disease management (4 measures) that we thought would be most important to an employed population. The measures were patient satisfaction, breast cancer screening, antidepressant medication management, hypertension control, use of appropriate medications for asthma, comprehensive diabetes care (including assessment and control of glycosylated hemoglobin, cholesterol assessment and control, retinal eye exams, and nephropathy screening), and cholesterol management after heart disease (defined as achieving a low-density lipoprotein cholesterol level <100 mg/dL). For ease of presentation, we focused our presentation of regression analyses on 4 of these measures, including 1 each in the areas of satisfaction, prevention, mental health, and other chronic disease management. The results for the other 3 quality indicators were qualitatively similar.
At the health plan level, we asked if the plan collected information on the measure, used the information obtained to target quality improvement efforts, or had demonstrated improvements in quality. At the physician/group level, we asked if the plan collected information, provided this information to the physicians/groups, used the information in a report card for enrollees, or used the information in a P4P program. We also inquired into 2 patient-specific uses of data: whether the plan sent patient-specific feedback to the responsible physician and whether the plan sent specific reminders directly to patients (eg, for a flu shot).
Finally, we also elicited information on the collection and use of hospital performance data. For these questions, we inquired into the collection of data regarding the Leapfrog standards (eg, intensive care unit staffing, physician order entry, procedure volume),18 National Quality Forum Safe Practices, Joint Commission on the Accreditation of Healthcare Organizations core measures,6 nurse staffing ratios, complication rates,19 risk-adjusted mortality rates,20 and patient satisfaction surveys.21 We then asked whether these data were used to define hospital tiers (eg, with varying copayment levels) or to provide increased payments.
Among 309 health plans we initially identified, 57 were found to be ineligible because they no longer offered a commercial HMO product in the relevant market (n = 36), had closed entirely (n = 11), or were duplicates (n = 10). Of the 252 eligible plans, 242 completed the survey (96% response rate). The overall item response rate was more than 90%.
Of the 242 health plans, 77 (32%) primarily compensated PCPs via some form of capitation, and the remainder used FFS (63%) or salary (5%) (). Capitated and salary plans were more commonly located in the West (53% and 42%, respectively, compared with 16% of FFS plans; overall P <.001) and less commonly located in the South (17% and 25%, respectively, compared with 42% of FFS plans; overall P = .015). The vast majority of plans were network or IPA model plans, and about two thirds were for profit. Most health plans required the use of a gatekeeper.
Although not as universal as at the health plan level, substantial data collection activities also were reported at the level of the individual provider or physician group, ranging from a low of 50.4% for hypertension control to 81.4% for diabetes care. Moreover, most health plans that collected individual-level or group-level data also fed these data back for use by the physicians or physician organizations in their quality improvement efforts. Fewer health plans used these measures in P4P programs. The most frequently used measure in P4P programs was diabetes care (used by 40.5% of plans), whereas relatively few plans used the hypertension control or antidepressant medication management measures in this way (14.5% and 17.4%, respectively). Twenty-five percent or fewer of the health plans reported data to enrollees in report cards (ranging from 13.6% for antidepressant medication management to 25.2% for diabetes care).
Collection and Use of Hospital Performance Data
Most health plans collected a variety of performance data about hospitals in their network, including 76.4% that examined data on standards promulgated by the Leapfrog Group and 63.6% that examined data from the Hospital Quality Alliance (). Few health plans used these data to define hospital tiers with differential copayments or to provide extra payments to hospitals. For instance, approximately 20% of health plans used any hospital performance data to provide extra payments to hospitals, and slightly fewer used these data to define hospital tiers.
Data Collection and Use in Capitated and Fee-for-Service Health Plans
At the health plan level (), there were relatively few differences in the collection and use of performance indicators between capitated and FFS health plans in both unadjusted and adjusted analyses. In general, capitated plans almost universally collected data on performance measures in each domain (100% for all measures except antidepressant medication management) and targeted these measures for improvement. Although FFS plans collected data almost as frequently, they targeted the measures for improvement less often. For instance, in adjusted analyses, 98.8% of FFS plans collected satisfaction data but only 90.2% targeted this measure for improvement, compared with 99.0% of capitated plans (P = .03). In unadjusted analyses, capitated health plans reported more success in actually improving care; these differences were no longer significant after adjustment for other health plan characteristics.
Use of Patient-Level Data
Health plans also used their data to remind patients about needed care and to provide patient-specific reminders for physicians (). Both capitated and FFS health plans commonly reported sending patient reminders, with rates of about 90% for mammography and diabetes care reminders. Physician reminders, although also common, were used somewhat more frequently by capitated health plans than by FFS health plans.
2. Hillman A, Welch W, Pauly M. Contractual arrangements between HMOs and primary care physicians: three-tiered HMOs and risk pools. Med Care Res Rev. 1992;30(2):136-148.
4. Pauly M, Eisenberg J, Radany M, Erder M, Feldman R, Schwartz J. Paying Physicians: Options for Controlling Cost, Volume, and Intensity of Services. Ann Arbor, MI: Health Administration Press; 1992.
6. US Department of Health and Human Services. Hospital Compare: a quality tool for adults, including people with Medicare. http://www.hospitalcompare.hhs.gov/. Accessed October 12, 2006.
8. National Committee for Quality Assurance (NCQA). Quality Compass. Washington, DC: NCQA; 2003.
10. Newhouse J, Buchanan J, Bailit H, et al. Managed care: an industry snapshot. Inquiry. 2002;39(3):207-220.
12. Remler D, Gray B, Newhouse J. Does managed care mean more hassle for physicians? Inquiry. 2000;37(3):304-316.
14. The Henry J. Kaiser Family Foundation. State health facts: total HMO enrollment, July 2006. http://www.statehealthfacts.org/comparemaptable.jsp?ind=348&cat=7. Accessed January 17, 2008.
16. US Census Bureau. Census 2000 summary file 1. 2000. http://www.census.gov/Press-Release/www/2001/sumfile1.html. Accessed March 7, 2008.
18. The Leapfrog Group. The Leapfrog safety practices. http://www.leapfroggroup.org/for_hospitals/leapfrog_hospital_quality_and_safety_survey_copy/leapfrog_safety_practices. Accessed January 17, 2008.
20. Werner R, Bradlow E. Relationship between Medicare’s Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
22. Fisher E, Wennberg D, Stukel T, Gottlieb D, Lucas F, Pinder E. The implications of regional variations in Medicare spending, part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273-287.
24. National Quality Forum. National Quality Forum. http://www.qualityforum.org. Accessed January 17, 2008.
26. Centers for Medicare and Medicaid Services. Medicare personal plan finder. http://www.medicare.gov/MPPF/Include/DataSection/Questions/Welcome.asp?version=default&browser=Firefox%7C1%7CW
27. Centers for Medicare and Medicaid Services. Hospital quality initiatives. http://www.cms.hhs.gov/HospitalQualityInits/. Accessed October 12, 2006.
29. Landon B, Normand S, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA. 2003;290(9):1183-1189.
31. Rosenthal M, Frank R, Buchanan J, Epstein A. Scale and structure of capitated physician organizations in California. Health Aff (Millwood). 2001;20(4):109-119.