This study examines the association between care management and quality improvement interventions implemented by physician groups and those groups' success under pay-for-performance.
Objectives: To examine the association between performance on clinical process measures and intermediate outcomes and the use of chronic care management processes (CMPs), electronic medical record (EMR) capabilities, and participation in external quality improvement (QI) initiatives.
Study Design: Cross-sectional analysis of linked 2006 clinical performance scores from the Integrated Healthcare Association’s pay-for-performance program and survey data from the 2nd National Study of Physician Organizations among 108 California physician organizations (POs).
Methods: Controlling for differences in PO size, organization type (medical group or independent practice association), and Medicaid revenue, we used ordinary least squares regression analysis to examine the association between the use of CMPs, EMR capabilities, and external QI initiatives and performance on the following 3 clinical composite measures: diabetes management, processes of care, and intermediate outcomes (diabetes and cardiovascular).
Results: Greater use of CMPs was significantly associated with clinical performance: among POs using more than 5 CMPs, we observed a 3.2-point higher diabetes management score on a performance scale with scores ranging from 0 to 100 (P <.001), while for each 1.0-point increase on the CMP index, we observed a 1.0-point gain in intermediate outcomes (P <.001). Participation in external QI initiatives was positively associated with improved delivery of clinical processes of care: a 1.0-point increase on the QI index translated into a 1.4-point gain in processes-of-care performance (P = .02). No relationship was observed between EMR capabilities and performance.
Conclusion: Greater investments in CMPs and QI interventions may help POs raise clinical performance and achieve success under performance-based accountability schemes.
(Am J Manag Care. 2010;16(8):601-606)
Physician organizations are being exposed to incentives to drive improvements in quality, yet little is known about which investments may contribute to gains in clinical performance. Findings from this study suggest that greater investments in chronic care management processes and external quality improvement initiatives are associated with better clinical performance.
The lackluster performance of the US health system1,2 has led to several interventions, including provider profiling,3 transparency of performance information,4 and pay for performance (P4P),5-7 that are being implemented in an effort to close the quality gap. A key example is the Integrated Healthcare Association (IHA)’s P4P initiative, which is the largest P4P program in the United States, targeting 225 California managed care medical groups and independent practice associations (IPAs). The IHA’s P4P program uses financial incentives and public reporting of performance scores to drive improvements in clinical quality, patient experience, and adoption of information technologies.8
Although these strategies are intended to stimulate changes by physician organizations (POs), POs face substantial challenges in understanding what investments they could undertake to improve performance scores and achieve success under P4P and public accountability schemes. It is unknown whether investments in chronic care management processes (CMPs), clinical information technology, or quality improvement (QI) activities are associated with better performance. The limited empirical evidence about what factors are associated with better performance hinders providers’ investment decision making and our collective ability to close the quality gap.
To address this question, we examined the relationship between the use of organized processes to improve quality and PO clinical performance scores collected in the context of California’s P4P program. We examined (1) the use of CMPs, (2) electronic medical record (EMR) capabilities, and (3) participation in externally organized QI initiatives, hypothesizing that greater use of these processes is associated with better clinical performance.
We combined IHA clinical data with survey data from the 2nd National Study of Physician Organizations (NSPO2).9-11 Among 225 unique IHA POs, 180 reported clinical data in 2006 for patients enrolled in commercial health maintenance organization and point-of-service plan products. The IHA’s program allows POs the option of being scored using aggregated health plan encounter data or via audited PO self-reported data; self-reporting groups have higher scores than non-self-reporting groups in large measure because of greater data completeness. To eliminate possible bias due to differential reporting procedures, we used only health plan aggregated data for 14 clinical measures (PO-level numerators and denominators) that existed for all 180 POs.
Control variables included organization type (medical group or IPA), PO size, and percentage of revenue from Medicaid (details are provided in a Technical eAppendix available at www.ajmc.com). We hypothesize that (1) medical groups will perform better than IPAs because of greater integration of CMPs, (2) larger organizations will outperform smaller organizations owing to economies of scale and availability of resources to support QI, and (3) greater Medicaid revenue will be negatively related to performance because of the effect of lower reimbursement, which hinders investments in organized care processes, and challenges in caring for low-income or disabled patients.
Data on organization characteristics and QI processes were collected in the NSPO2 telephone survey conducted between March 2006 and March 2007 among all US POs that had 20 or more physicians and that treated patients with asthma, diabetes, congestive heart failure, or depression. Of 218 California POs with usable contact information, 20 were classified as ineligible (out of business, did not treat 1 of the target diseases, or had no contact information). The survey response rate for the remaining 198 POs was 64.1%. We matched 127 California NSPO2 respondents with 180 POs having IHA clinical scores, resulting in a final sample of 108 POs. We found no significant differences between these 108 POs and the 72 POs with IHA clinical data but without NSPO2 data in terms of medical group or IPA structure and the number of physicians in the group; however, the 108 POs were more likely to have greater enrollment, have fewer Medicaid enrollees, and be located in northern California. RAND Corporation’s Human Subjects Protection Committee approved the study.
Dependent Variables. We constructed 3 dependent variables (diabetes management, processes of care, and intermediate outcomes) using an opportunities compositing approach, summing all instances in which recommended care was delivered or an outcome was achieved and dividing by the number of times that patients within a PO were eligible for relevant indicators in each composite measure.1,12 Scores range from 0 to 1 and reflect the proportion of opportunities in which recommended care was delivered. The individual measures were constructed using National Committee for Quality Assurance Health Employer Data and Information Set specifications.13 The diabetes management composite reflected screening and control measures, the intermediate outcomes composite incorporated outcomes for coronary artery disease and diabetes, and the processes-of-care composite included 11 clinical measures (preventive screens, immunizations, asthma maintenance, and upper respiratory tract infection).
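The opportunities compositing approach described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the example numerator/denominator pairs are hypothetical, not taken from the study data. The key point is that indicators are pooled at the opportunity level rather than averaging per-indicator rates, so indicators with more eligible patients carry proportionally more weight.

```python
# Sketch of the "opportunities" composite: pool all instances in which
# recommended care was delivered (numerators) over all instances in which
# patients were eligible (denominators) across the indicators in the composite.

def opportunities_composite(indicators):
    """indicators: list of (numerator, denominator) pairs, one pair per
    clinical indicator within the composite for a single PO."""
    delivered = sum(num for num, _ in indicators)
    eligible = sum(den for _, den in indicators)
    return delivered / eligible  # proportion of opportunities met, 0 to 1

# Fabricated PO with three indicators:
po_indicators = [(80, 100), (45, 90), (60, 60)]
score = opportunities_composite(po_indicators)  # (80+45+60)/(100+90+60) = 0.74
```

Note that this differs from taking the simple mean of the three per-indicator rates (0.80, 0.50, 1.00), which would give 0.767; the opportunities approach weights each indicator by its denominator.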
Independent Variables. We included the use of CMPs, EMR capabilities, and participation in external QI initiatives as index variables. The CMP measures, based on the chronic care model by Wagner et al,14,15 captured whether the PO used or provided the following: (1) electronic registries or patient lists, (2) guideline-based reminders, (3) feedback data to physicians, (4) patient reminders for follow-up care, (5) nonphysician staff to educate patients about CMPs, and (6) nurse case managers. A process-specific measure was constructed for each of these 6 CMPs (based on whether the PO used or provided the CMPs for diabetes, asthma, and congestive heart failure), resulting in a score ranging from 0 to 3 for each CMP. Construction of the CMP index was tailored to each dependent variable, with a diabetes-specific index ranging from 0 to 6 for the diabetes management composite, an index ranging from 0 to 18 for the processes-of-care composite, and an index ranging from 0 to 6 for the intermediate outcomes composite.
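The CMP index construction described above amounts to counting, for each of the 6 CMPs, the conditions (diabetes, asthma, congestive heart failure) for which the PO uses that process. A minimal sketch, with hypothetical variable names and a fabricated example PO:

```python
# Sketch of the CMP index: each of 6 CMPs is scored 0-3 (one point per
# condition for which the PO uses it); the condition-specific index (0-6)
# counts one condition only, and the full index (0-18) sums all three.

CMPS = ["registry", "guideline_reminders", "physician_feedback",
        "patient_reminders", "patient_education", "nurse_case_managers"]
CONDITIONS = ["diabetes", "asthma", "chf"]

def cmp_index(uses, condition=None):
    """uses: dict mapping (cmp, condition) -> bool.
    condition=None gives the 0-18 all-condition index; naming a
    condition gives the 0-6 condition-specific index."""
    conds = [condition] if condition else CONDITIONS
    return sum(bool(uses.get((c, cond))) for c in CMPS for cond in conds)

# Fabricated PO: a registry for diabetes and asthma, nurse case managers
# for diabetes only.
example = {("registry", "diabetes"): True, ("registry", "asthma"): True,
           ("nurse_case_managers", "diabetes"): True}
```

Under this sketch, the example PO scores 2 on the diabetes-specific index and 3 on the all-condition index.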
Electronic medical record capabilities were measured on an index ranging from 0 to 5 based on EMR components used by most of a PO’s physicians, such as ambulatory care progress notes or electronic prescribing.2,16-19 Participation in external QI initiatives was measured using an index ranging from 0 to 2 based on whether the PO participated in QI demonstration programs (eg, Bridges to Excellence,20 Institute for Healthcare Improvement Quality Collaboratives,21 Pursuing Perfection,22 Improving Chronic Illness Care,23 rapid-cycle QI activities24).
The model controlled for differences in PO size, organization type (medical group or IPA), and percentage of revenue from Medicaid. Because of the skewed nature of PO size, this was specified as the natural logarithm of the number of physicians in the practice. Table 1 gives descriptive statistics of the independent and dependent variables.
We ran a multivariate ordinary least squares regression model weighted by the denominator sample sizes of the “opportunities” composite dependent variables to address heteroskedasticity that results from using aggregated data. Because the dependent variables are PO means, the equal variance assumption of the regression is violated when sample sizes differ across POs; without the correction, smaller POs would be treated as more influential than they really are. We ran a Pearson product moment correlation matrix using a 2-tailed significance test to assess the degree of collinearity among the variables in the model. Associations among our independent variables were weak (absolute correlations <0.20), except for size and group structure (0.42) and medical group and EMR capabilities (0.38). Given the small sample, we used the dfbeta statistics in SAS (SAS 9.1; SAS Institute, Cary, NC) to test the sensitivity of our results to possible influential observations.
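The weighting logic above follows from the fact that each PO's score is a mean over n_i opportunities, so its variance is proportional to 1/n_i; weighting each observation by n_i restores equal residual variance. A minimal closed-form sketch in Python with NumPy (not the authors' SAS code; the function and data here are hypothetical):

```python
import numpy as np

# Sketch of denominator-weighted OLS: solve (X'WX) beta = X'Wy with
# W = diag(n), where n_i is the number of opportunities behind PO i's score.

def weighted_ols(X, y, n):
    """X: design matrix (include an intercept column yourself);
    y: PO composite scores; n: denominator sizes used as weights.
    Returns the weighted least squares coefficient vector."""
    W = np.diag(n.astype(float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Fabricated check: if y is exactly linear in x, WLS recovers the
# coefficients regardless of the weights.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x
n = np.array([10, 20, 30, 40, 50])
beta = weighted_ols(X, y, n)  # approximately [2.0, 3.0]
```

For non-degenerate data the weights matter: POs with larger denominators pull the fitted line toward their scores, which is exactly the intended correction.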
The signs on all coefficients were as hypothesized (Table 2), and the models achieved moderate predictive power (R2 range, 0.17-0.30). The CMP index demonstrated significant positive associations with performance on 2 of the composite measures, namely, diabetes management and intermediate outcomes. Higher performance in diabetes management (3.2 points higher on a 0-100 performance scale) was associated with substantial investments in CMPs (>5 CMPs on a 0-6 scale); each 1.0-point increase on the CMP index translated into a 1.0-point gain for the intermediate outcomes composite (P <.001). Higher engagement in external QI initiatives was significantly positively associated with the processes-of-care composite; a 1.0-point increase on the QI index translated into a 1.4-point gain on the processes-of-care composite (P = .02). The results were robust to the removal of influential observations.
Among the control variables, medical group organization type was significantly associated with higher performance for 2 of the composite measures (3.0-4.6 points higher for medical groups compared with IPAs). Physician organization size was positively associated with higher performance on the processes-of-care composite (coefficient, 1.5 points per unit of log size) (P = .002). The median PO size was 168 physicians (minimum, 8; maximum, 2207); given the 1.5-point coefficient, increasing a PO from 10 to 100 physicians would translate into an approximately 3.5-point gain on the processes-of-care composite (Table 2). We observed no relationship between Medicaid revenue and performance.
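The size effect reported above is straightforward arithmetic on the log scale: because size enters the model as ln(physicians), the coefficient of 1.5 applies per unit change in the natural log, and a move from 10 to 100 physicians is a change of ln(100) − ln(10) = ln(10) ≈ 2.30 log units.

```python
import math

# Arithmetic behind the reported size effect: a 1.5-point coefficient per
# unit of ln(physicians), applied to a 10-fold increase in PO size.
coef = 1.5
log_change = math.log(100) - math.log(10)  # = ln(10), about 2.30
gain = coef * log_change                   # about 3.45, i.e. ~3.5 points
```

The same tenfold increase anywhere on the size distribution (eg, 100 to 1000 physicians) implies the same predicted gain, a direct consequence of the log specification.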
Enhancing our understanding of the types of investments that will yield positive gains in performance is essential to enabling POs and their physicians to improve care for patients. Although causality in a cross-sectional design cannot be established, the associations we observed in this broad sample of physician group practices suggest a positive relationship between clinical performance and the use of CMPs and involvement in external QI initiatives. A positive association was observed between the use of CMPs and clinical performance (approximately 1.0-3.2 points higher) for 2 of 3 composite clinical outcomes measured in this study. We assumed that the component parts of our CMP measure are equally valued; in other words, adding a patient registry has the same positive relationship with performance as providing patient feedback. A larger study could help to discern whether the differential contributions of specific CMPs such as disease registries or performance feedback to physicians are more important for improving clinical performance or whether the broader set of CMPs is required.
To a lesser extent, we found that involvement in external QI initiatives was associated with higher performance on the processes-of-care composite; each 1.0-point increase on the QI index translated into a 1.4-point gain in the processes-of-care composite score (score range, 0-100 points) (P = .02). Greater use of EMR capabilities was not associated with improved performance, possibly because for many POs EMR capabilities remained in their early phases of adoption and were not being readily applied for QI purposes at the time of this study. Other studies22,23 have similarly found no relationship between EMR use and clinical quality.
Among the control variables, large PO size seems to confer performance advantages, which is consistent with recent research suggesting that integrated and larger POs perform better on technical quality24 and patient experience25,26 measures. Large-scale operations may signal a greater ability to bring resources to bear on QI and to invest in care management “capabilities” and systems support.
The study findings should be considered in light of several limitations. Although the relationship was tested in one of the largest measurement and improvement projects in the United States, the study focuses on the experiences of California POs. The replicability of these results may be influenced by differences in PO structures and market characteristics. In addition, the cross-sectional nature of this study precludes the ability to demonstrate a causal relationship between performance and the use of CMPs or external QI initiatives. The small sample size of POs among which to detect differences also limited the study; therefore, it will be important to replicate this work in other settings with a larger sample of provider practices. Moreover, the composite measures may act as a proxy for unmeasured attributes such as organizational culture27 of groups that choose to invest in more CMPs. Finally, it is possible that POs with particular patient mixes (eg, better educated and higher income) perform better. Because we did not have access to patient-level data, we were unable to assess this relationship.
Although the present findings suggest that the use of CMPs and, to a lesser extent, participation in external QI initiatives are associated with better clinical performance, the gains are modest. A stronger and broader set of interventions such as redesigned care structures and processes may be needed to achieve greater gains. The continuing evolution toward performance-based payment and other forms of accountability underscores the importance of physician practices developing the capabilities needed to successfully respond to the incentives.
Rodger Madison of RAND Corporation assisted with analysis of the Integrated Healthcare Association data.
Author Affiliations: From RAND Corporation (CLD, JA), Santa Monica, CA; University of California, Berkeley (SMS, KR, RRG, RKM), Berkeley, CA; University of California, San Francisco (DR), San Francisco, CA; and Department of Public Health (LPC), Weill Cornell Medical College, New York, NY.
Funding Source: This work was supported by grant 04-1109 from the California HealthCare Foundation, by grant 51573 from the Robert Wood Johnson Foundation, and by grant 20050334 from the Commonwealth Fund.
The funders had no role in the conduct of the research, analysis, or interpretation of the findings nor in the presentation of the results herein. The views presented are those of the authors and not necessarily those of the funders.
Author Disclosures: The authors (CLD, SMS, KR, RRG, DR, RKM, LPC, JA) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (CLD, SMS, KR, RRG, DR, LPC); acquisition of data (SMS, KR, RRG, DR, LPC); analysis and interpretation of data (CLD, SMS, RRG, DR, RKM, LPC, JA); drafting of the manuscript (CLD, SMS, RRG, RKM, LPC); critical revision of the manuscript for important intellectual content (CLD, SMS, KR, RRG, DR, LPC, JA); statistical analysis (CLD, RRG, RKM, JA); obtaining funding (CLD, SMS, KR, RRG, LPC); administrative, technical, or logistic support (CLD, RRG); and supervision (CLD, RRG).
Address correspondence to: Cheryl L. Damberg, PhD, RAND Corporation, 1776 Main St, PO Box 2138, Santa Monica, CA 90407-2138. E-mail: email@example.com.
1. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
3. Pham HH, Ginsburg PB. Unhealthy trends: the future of physician services. Health Aff (Millwood). 2007;26(6):1586-1598.
4. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111-123.
5. Rosenthal MB, Landon BE, Normand SL, Frank RG, Epstein AM. Pay for performance in commercial HMOs. N Engl J Med. 2006;355(18):1895-1902.
6. Sorbero MES, Damberg CL, Shaw R, et al. Assessment of Pay-for-Performance Options for Medicare Physician Services: Final Report. Washington, DC: US Dept of Health and Human Services; 2006. RAND working paper series prepared for the Assistant Secretary for Planning and Evaluation.
7. Damberg CL, Sorbero MES, Mehrotra A, Teleki S, Lovejoy S, Bradley L. An Environmental Scan of Pay for Performance in the Hospital Setting: Final Report. Washington, DC: US Dept of Health and Human Services; 2007. RAND working paper series prepared for the Assistant Secretary for Planning and Evaluation.
8. Damberg CL, Raube K, Williams T, Shortell SM. Paying for performance: implementing a statewide project in California. Qual Manag Health Care. 2005;14(2):66-79.
9. Rittenhouse DR, Casalino LP, Gillies RR, Shortell SM, Lau B. Measuring the medical home infrastructure in large medical groups. Health Aff (Millwood). 2008;27(5):1246-1258.
10. Shortell SM, Gillies RR, Siddique J, et al. Improving chronic illness care: a longitudinal cohort analysis of large physician organizations. Med Care. 2009;47(9):932-939.
11. Integrated Healthcare Association Web site. http://www.iha.org. Accessed July 21, 2009.
12. Reeves D, Campbell SM, Adams J, Shekelle PG, Kontopantelis E, Roland MO. Combining multiple indicators of clinical quality: an evaluation of different analytic approaches. Med Care. 2007;45(6):489-496.
13. National Committee for Quality Assurance. HEDIS 2007 Technical Specifications. Washington, DC: National Committee for Quality Assurance; 2007. No. 10284-100-07.
14. Wagner EH, Austin BT, Davis C, Hindmarsch M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Aff (Millwood). 2001;20(6):64-78.
15. Wagner EH, Glasgow RE, Davis C, et al. Quality improvement in chronic illness care: a collaborative approach. Jt Comm J Qual Improv. 2001;27(2):63-80.
16. Brailer DJ. Health information technology is a vehicle, not a destination: a conversation with David J. Brailer: interview by Arnold Milstein. Health Aff (Millwood). 2007;26(2):w236-w241.
17. Blumenthal D, Glaser JP. Information technology comes to medicine. N Engl J Med. 2007;356(24):2527-2534.
18. Casalino L, Gillies RR, Shortell SM, et al. External incentives, information technology, and organized processes to improve health care quality for patients with chronic diseases. JAMA. 2003;289(4):434-441.
19. Gans D, Kralewski J, Hammons T, Dowd B. Medical groups’ adoption of electronic health records and information systems. Health Aff (Millwood). 2005;24(5):1323-1333.
20. Bridges to Excellence Web site. http://www.bridgestoexcellence.org. Accessed October 25, 2007.
21. Lin MK, Marsteller JA, Shortell SM, et al. Motivation to change chronic illness care: results from a national evaluation of quality improvement collaboratives. Health Care Manage Rev. 2005;30(2):139-156.
22. Fleming B, Silver A, Ocepek-Welikson K, Keller D. The relationship between organizational systems and clinical quality in diabetes care. Am J Manag Care. 2004;10(12):934-944.
23. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA. 2006;296(4):427-440.
24. Mehrotra A, Epstein AM, Rosenthal MB. Do integrated medical groups provide higher-quality medical care than individual practice associations? Ann Intern Med. 2006;145(11):826-833.
25. Sequist TD, von Glahn T, Li A, Rogers WH, Safran DG. Statewide evaluation of measuring physician delivery of self-management support in chronic disease care. J Gen Intern Med. 2009;24(8):939-945.
26. Rodriguez HP, von Glahn T, Rogers WH, Safran DG. Organizational and market influences on physician performance on patient experience measures. Health Serv Res. 2009;44(3):880-901.
27. Zazzali JL, Alexander JA, Shortell SM, Burns LR. Organizational culture and physician satisfaction with dimensions of group practice. Health Serv Res. 2007;42(3, pt 1):1150-1176.