The American Journal of Managed Care January 2010
'All-or-None' (Bundled) Process and Outcome Indicators of Diabetes Care
In this study, providers were more likely to achieve processes-of-care goals when diabetes care was bundled at the indicator level than at the patient level.
Using the indicator-level bundle, the mean rate of performance on processes of care across all programs was 77.3%, and the mean rate of performance on outcomes was 44.5% (P <.001) (Table 5 and Table 6). The patient-level bundle revealed that the mean rate of performance on processes of care across all programs was 33.5% and the mean rate of performance on outcomes was 16.2% (P <.001). Overall, the distributions for patient-level bundles were lower than those for indicator-level bundles.
Comparing the two methods of bundling revealed that the bundling method itself also affected achievement of performance goals. The indicator-level processes-of-care bundle showed that care was delivered 77.3% of the time across the population; when evaluating how many patients received all indicated processes of care, performance dropped to a significantly lower 33.5% (P = .001) (Table 5 and Table 6). A similar difference was found for outcomes measures: 44.5% of the population achieved the indicator-level bundle of controlled blood pressure, glucose, or lipid levels, but this dropped to 16.2% when evaluating the percentage of patients achieving all 3 controlled (patient-level bundle). This difference was also statistically significant (P <.001).
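The distinction between the two bundling methods can be sketched as a small calculation. In this illustrative sketch, the patient data and indicator names are hypothetical and do not reflect the study's exact measure definitions; it shows only how the same records yield a higher indicator-level rate than a patient-level (all-or-none) rate:

```python
# Hypothetical patient records: each marks whether each of 3 diabetes
# outcome indicators was met. These names and values are illustrative only.
patients = [
    {"a1c_controlled": True,  "bp_controlled": True,  "ldl_controlled": False},
    {"a1c_controlled": True,  "bp_controlled": False, "ldl_controlled": False},
    {"a1c_controlled": True,  "bp_controlled": True,  "ldl_controlled": True},
    {"a1c_controlled": False, "bp_controlled": True,  "ldl_controlled": True},
]
indicators = ["a1c_controlled", "bp_controlled", "ldl_controlled"]

# Indicator-level bundle: pooled rate of indicators met across all
# patient-indicator opportunities.
met = sum(p[i] for p in patients for i in indicators)
opportunities = len(patients) * len(indicators)
indicator_level_rate = met / opportunities  # 8 of 12 opportunities met

# Patient-level (all-or-none) bundle: fraction of patients meeting
# every indicator.
all_met = sum(all(p[i] for i in indicators) for p in patients)
patient_level_rate = all_met / len(patients)  # only 1 of 4 patients meets all 3

print(f"Indicator-level rate: {indicator_level_rate:.1%}")  # 66.7%
print(f"Patient-level rate:   {patient_level_rate:.1%}")    # 25.0%
```

Because a patient counts toward the all-or-none bundle only when every component is met, the patient-level rate can never exceed the indicator-level rate, which mirrors the 77.3% versus 33.5% and 44.5% versus 16.2% gaps reported in the study.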
The concept of pay for performance has been developed to reward systems of care that achieve desired outcomes and to withhold incentives from those who do not meet standards of care. Using a bundled, or all-or-none, approach demands that systems of care be developed that incorporate a team approach and goal-focused care so that optimal care is provided. Proponents of the bundling method argue that it provides an example of best practices.
However, as shown herein, the method of bundling care has significant effects on performance achievement.21-24 For example, in this study, resident physicians were more likely to achieve the goal in processes of care (low-density lipoprotein cholesterol test ordered in the past year) than in outcomes (low-density lipoprotein cholesterol level <100 mg/dL). Completing a task is often easier than completing the task successfully. Meeting an outcome measure also involves factors outside of the physician's control, such as patient genetics, patient adherence, and system factors such as access to care and formulary coverage of medications to treat the disease process. The need to adjust outcomes for patient and system factors outside of a physician's processes-of-care control has led to risk-adjustment methods in the inpatient setting, where outcomes such as mortality are investigated.25 To date, performance measurement and bundling programs have shown mixed results for improvements in diabetes outcomes.26-29
In this study, there were significant differences in performance between bundling methods. There was an absolute difference of 33.7% in the frequency of processes of care when bundled at the indicator level versus the patient level, and a 28.0% difference in the frequency of outcomes achieved at the indicator level versus the patient level. The implications of this difference need to be understood in the context of how the measurement is used.
However, each bundling method has disadvantages. At the indicator level, physicians may be able to "score" higher without achieving the outcomes that are most important to patients, who tend to care about outcomes that will affect the quantity or quality of their lives. Patient-level bundling, in turn, may be complicated by factors outside of the physician's control and may inadvertently disadvantage physicians based on the patients for whom they provide care. This may lead to patient profiling and selective access to care, which would not serve the public health interest. When applying these bundling methods to performance review, it may be prudent to treat the indicator-level bundle as the minimum basic standard and attach negative reinforcements, such as decreased reimbursement or a lower physician rating, to failure to meet it. The more stringent patient-level bundle could then be applied with positive reinforcements (increased reimbursement, bonuses, etc) that would reward those who achieve best care practices.
Limitations of this study include self-reporting of the diabetes data without an external audit. Residency programs are required to participate in this registry, but performance is not used to accredit or grade the residents or their programs; therefore, there is no reason to believe that the data are inaccurate because of such pressure. In addition, previous performance measurement programs that relied only on external data collection have proven problematic.30 Furthermore, the AOA-CAP database was not designed for pay-for-performance evaluation, and it might have been constructed differently had that been its purpose. Previous research has also raised questions about the reliability of individual physician report cards, especially when these report cards present outcomes data that can be affected by patient factors and by insufficient statistical power to detect differences.22,23,30,31
However, there have been some early successes in the use of bundling in outpatient diabetes care. Weber et al26 used bundling of processes of care and outcomes, together with an electronic medical record, to improve diabetes care for an entire health system within a calendar year. In that study, there was a statistically significant increase in the number of patients who reached goal A1C and blood pressure levels and who had received a pneumococcal vaccine. Projects such as these are proactive and, if reproducible, could provide a stimulus for greater use of bundled care to improve outcomes.
Bundling of care can be useful in clinical care and in performance measurement. When dealing with large numbers of patients or physicians, bundling can provide a summary statistic that can be used over time to track progress and to demonstrate performance improvement. This could be used by physicians to market their practice or to provide head-to-head comparison with other regional physicians.
Unfortunately, a shortcoming of bundling care is the assumption that each component of the bundle is of equal importance. Furthermore, bundling of outcomes will require some adjustment for factors outside of a physician's control and may penalize physicians who serve underserved communities. Scholle et al30,31 suggest that a reliability score be applied when using composite measures for physicians. If financial incentives are tied to the bundling process, it is critical that they be applied uniformly and directed toward behaviors that help to improve quality of care for the individual and for the general public. Snyder et al32 recommend that performance measurement be used only when several safeguards have been enacted, including ensuring transparency, measuring those elements that are important to patients, and monitoring and intervening for unwanted physician behavior (such as deselection of patients or gaming the system).
In conclusion, the method of bundling in this study, whether processes of care versus outcomes or indicator level versus patient level, changed performance results in a statistically significant way. In addition, this study demonstrated that the AOA-CAP database can be a powerful tool for quality performance programs and can assist in the bundling of performance measures. Because bundling methods will be used in the future, physicians need to address patient-level and system-level variables to make significant changes in achieving these goals. We recommend a careful and thorough evaluation of bundling methods before they are implemented in the healthcare system.
Author Affiliations: From the Department of Family Medicine (JHS) and CORE Research (GDB), Ohio University, Athens, OH; Applied Outcomes (RJS), Worthington, OH; and the Department of Quality and Research (SLM), American Osteopathic Association, Chicago, IL.
Funding Source: Support of the research team was provided by the Osteopathic Heritage Foundation.
Author Disclosure: Dr Shubrook reports receiving grants from Novo Nordisk, Osteopathic Heritage Foundation, sanofi-aventis, and Takeda. Ms McGill is an employee of the American Osteopathic Association, which funds the Clinical Assessment Program. Ms McGill also reports receiving a research grant from the Osteopathic Heritage Foundation to develop the manuscript. The other authors (RJS, GDB) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (JHS, RJS); acquisition of data (JHS, RJS, SLM); analysis and interpretation of data (JHS, RJS, GDB); drafting of the manuscript (JHS, RJS, GDB); critical revision of the manuscript for important intellectual content (JHS, GDB); statistical analysis (RJS, GDB); obtaining funding (SLM); administrative, technical, or logistic support (SLM); and supervision (JHS, SLM).
Address correspondence to: Jay H. Shubrook Jr, DO, Department of Family Medicine, Ohio University, 69 Elmwood Pl, Athens, OH 45701. E-mail: firstname.lastname@example.org.
1. Centers for Disease Control and Prevention. National diabetes fact sheet, 2007. http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2007.pdf. Accessed January 25, 2009.
2. American Diabetes Association. Direct and indirect costs of diabetes in the US. http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2007.pdf. Accessed January 9, 2010.
3. American Diabetes Association. Executive summary: standards of medical care in diabetes: 2008. http://care.diabetesjournals.org/cgi/reprint31Supplement_1/S5. Accessed January 25, 2009.
4. National Cholesterol Education Program. Adult treatment panel III guidelines at-a-glance: quick desk reference. http://www.nhlbi.nih.gov/guidelines/cholesterol/atglance.pdf. Accessed January 25, 2009.
5. National Heart, Lung, and Blood Institute. The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7). http://www.nhlbi.nih.gov/guidelines/hypertension/. Accessed January 25, 2009.
6. Gaede P, Vedel P, Larsen N, Jensen GV, Parving HH, Pedersen O. Multifactorial intervention and cardiovascular disease in patients with type 2 diabetes. N Engl J Med. 2003;348(5):383-393.
7. Gaede P, Lund-Andersen H, Parving HH, Pedersen O. Effect of a multifactorial intervention on mortality in type 2 diabetes. N Engl J Med. 2008;358(6):580-591.
8. Resnick HE, Foster GL, Bardsley J, Ratner RE. Achievement of the American Diabetes Association clinical practice recommendations among U.S. adults with diabetes, 1999-2002: the National Health and Nutrition Examination Survey. Diabetes Care. 2006;29(3):531-537.
9. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
10. Grant RW, Buse JB, Meigs JB; University HealthSystem Consortium (UHC) Diabetes Benchmarking Project Team. Quality of diabetes care in U.S. academic medical centers: low rates of medical regimen change. Diabetes Care. 2005;28(2):337-442.
11. National Committee for Quality Assurance Web site. http://www.ncqa.org. Accessed May 20, 2009.
12. National Committee for Quality Assurance. Health Employer Data Information Set (HEDIS) 2008. http://www.ncqa.org/tabid/536/Default.aspx. Accessed January 25, 2009.
13. Agency for Healthcare Research and Quality. The Ambulatory Care Quality Alliance recommended starter set: clinical performance measures for ambulatory care. http://www.ahrq.gov/qual/aqastart.htm. Accessed January 25, 2009.
14. Rosenthal MB, Landon BE, Normand SL, Frank RG, Epstein AM. Pay-for-performance in commercial HMOs. N Engl J Med. 2006;355(18):1895-1902.
15. An T, Bluhm J, Foldes SS, et al. A randomized trial of a pay-for-performance program targeting clinician referral to a state tobacco quitline. Arch Intern Med. 2008;168(18):1993-1999.
16. Nolan T, Berwick DM. All-or-none measurement raises the bar on performance. JAMA. 2006;295(10):1168-1170.
17. Dudley RA. Pay-for-performance research: how to learn what clinicians and policy makers need to know. JAMA. 2005;294(14):1821-1823.
18. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486-496.
19. Centers for Medicare & Medicaid Services 8th Scope of Work. Section B: supplies or services and prices/costs. http://www.cms.gov/QualityImprovementOrgs/Downloads/8thSOW.pdf. Accessed November 12, 2009.
20. Dellinger EP, Hausmann SM, Bratzler DW, et al. Hospitals collaborate to reduce surgical site infections. Am J Surg. 2005;190(1):9-15.
21. Premier Inc. Summary of the composite quality scoring methodology. http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/composite-scoring-overview.pdf. Accessed January 9, 2010.
22. Samuels TA, Bolen S, Yeh HC, et al. Missed opportunities in diabetes management: a longitudinal assessment of factors associated with sub-optimal quality. J Gen Intern Med. 2008;23(11):1770-1777.
23. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician “report cards” for assessing the costs and quality of care of a chronic disease. JAMA. 1999;281(22):2098-2105.
24. Greenfield S, Kaplan SH, Kahn R, Ninomiya J, Griffith JL. Profiling care provided by different groups of physicians: effects of patient case-mix (bias) and physician-level clustering on quality assessment results. Ann Intern Med. 2002;136(2):111-121.
25. Centers for Medicare & Medicaid Services. Medicare managed care risk adjustment method announced. http://www.cms.hhs.gov/apps/media/press/testimony.asp?Counter=100. Accessed May 25, 2009.
26. Weber V, Bloom F, Pierdon S, Wood C. Employing the electronic health record to improve diabetes care: a multifaceted intervention in an integrated delivery system. J Gen Intern Med. 2008;23(4):379-382.
27. Petitti DB, Contreras R, Ziel FH, Dudl J, Domurat ES, Hyatt JA. Evaluation of the effect of performance monitoring and feedback on care process, utilization, and outcome. Diabetes Care. 2000;23(2):192-196.
28. Mangione CM, Gerzoff RB, Williamson DF, et al; TRIAD Study Group. The association between quality of care and the intensity of diabetes disease management programs. Ann Intern Med. 2006;145(2):107-116.
29. Gray J, Millett C, Saxena S, Netuveli G, Khunti K, Majeed A. Ethnicity and quality of diabetes care in a health system with universal coverage: population-based cross-sectional survey in primary care. J Gen Intern Med. 2007;22(9):1317-1320.
30. Scholle SH, Roski J, Adams JL, et al. Benchmarking physician performance: reliability of individual and composite measures. Am J Manag Care. 2008;14(12):833-838.
31. Scholle SH, Roski J, Dunn DL, et al. Availability of data for measuring physician quality performance. Am J Manag Care. 2009;15(1):67-72.
32. Snyder L, Neubauer RL; American College of Physicians Ethics, Professionalism and Human Rights Committee. Pay-for-performance principles that promote patient-centered care: an ethics manifesto. Ann Intern Med. 2007;147(11):792-794.