The American Journal of Managed Care, January 2017

Value-Based Payment in Implementing Evidence-Based Care: The Mental Health Integration Program in Washington State

Yuhua Bao, PhD; Thomas G. McGuire, PhD; Ya-Fen Chan, PhD; Ashley A. Eggman, MS; Andrew M. Ryan, PhD; Martha L. Bruce, PhD, MPH; Harold Alan Pincus, MD; Erin Hafer, MPH; and Jürgen Unützer, MD, MPH, MA
Value-based payment improved fidelity to key elements of the Collaborative Care Model—an evidence-based mental health intervention—and improved patient depression outcomes in Washington state.
Hypotheses regarding the modifying effects were tested by adding interaction terms between the VBP indicator and the hypothesized modifier to the models. We included an additional interaction between VBP and the quadratic term of the modifier (eg, cumulative caseload squared) to allow for nonlinear effects. The VBP incentives were directed at the community health centers (known as “implementation sites” in MHIP), of which there were 7 in our study sample; each site had multiple clinics. We therefore also conducted sensitivity analyses in which the hypothesized modifiers (caseload and baseline fidelity) were measured at the site level.
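
As a minimal sketch of this moderation analysis (all variable names are hypothetical, and df is assumed to be a pandas DataFrame with one row per patient-month), the interaction terms could be specified with a patsy-style formula in Python's statsmodels:

import statsmodels.formula.api as smf

# 'fidelity' is a monthly 0/1 fidelity indicator, 'vbp' the VBP-exposure
# indicator, and 'caseload' the clinic's cumulative caseload prior to VBP.
# vbp*caseload expands to both main effects plus the vbp:caseload interaction;
# vbp:I(caseload**2) allows the nonlinear modifying effect described above.
mod = smf.mixedlm(
    "fidelity ~ vbp*caseload + I(caseload**2) + vbp:I(caseload**2)",
    data=df,
    groups=df["patient_id"],  # random intercept per patient
).fit()
print(mod.summary())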

To assess the association between VBP and depression outcomes, we estimated an extended Cox proportional hazards model of time to improvement in depression, censored at 24 weeks after the initial assessment/contact or at the patient’s last contact with the MHIP care manager, whichever occurred first. The key independent variable was an indicator of VBP exposure (ie, equal to 1 after January 1, 2009, and 0 otherwise). This indicator varied over time (switched from 0 to 1) for patients who enrolled in MHIP in 2008 but whose observation period extended into 2009. We conducted tests of the proportional hazards assumption based on the Schoenfeld residuals.34
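
A minimal sketch of such a time-varying Cox model, using the lifelines package and hypothetical column names, follows; the data are in long ("start-stop") format so that the VBP indicator can switch from 0 to 1 mid-follow-up for patients enrolled in 2008:

from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: one row per patient-interval, with 'start'
# and 'stop' in weeks since first contact (follow-up censored at 24 weeks or
# at the last care manager contact), 'improved' = 1 if clinically significant
# improvement occurred in the interval, and 'vbp' = 1 after January 1, 2009.
# Proportional hazards can then be checked via scaled Schoenfeld residuals,
# as in the paper.
ctv = CoxTimeVaryingFitter()
ctv.fit(
    df,
    id_col="patient_id",
    event_col="improved",
    start_col="start",
    stop_col="stop",
)
ctv.print_summary()  # exp(coef) on 'vbp' is the adjusted hazard ratio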

Adjusted analyses controlled for baseline patient age and gender, baseline PHQ-9 scores and comorbid behavioral health conditions, MHIP eligibility categories, and clinic fixed effects (to control for between-clinic differences in quality of care). To control for possible clinic learning over time, we also included the number of months the clinic had been participating in MHIP at the time the index patient enrolled, along with its quadratic and cubic terms.
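
In the same hypothetical formula style, this adjustment set might be written as follows (comorbidity indicators abbreviated for brevity; all names are illustrative):

# C(clinic) yields clinic fixed effects; the cubic polynomial in the clinic's
# months of MHIP participation at the index patient's enrollment proxies
# clinic learning over time.
adjusted_formula = (
    "outcome ~ vbp + age + female + phq9_baseline"
    " + C(eligibility) + C(clinic)"
    " + months_in_mhip + I(months_in_mhip**2) + I(months_in_mhip**3)"
)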

RESULTS

Table 2 presents descriptive statistics of baseline patient characteristics for the entire sample (n = 1806) and by whether a patient had at least 1 month of exposure to VBP within 24 weeks of their first contact with MHIP care managers. Patients with no exposure to VBP were more likely to be enrolled in the Disability Lifeline program than patients with at least 1 month of exposure (96.2% vs 79.7%). Partly because of this difference, patients with no exposure were also more likely to be aged between 40 and 59 years and more likely to have a PHQ-9 score of 20 or higher (indicating severe depression symptoms) than exposed patients. Prevalence of comorbid behavioral conditions was comparable between the 2 cohorts, except that rates of anxiety and bipolar disorders were slightly lower in the no-exposure group than in the exposed group.

For the fidelity outcomes, results of the multivariate analyses are presented as predicted probabilities with and without exposure to VBP and as the marginal effect of VBP (Table 3). (The mixed-effects linear probability models and their logistic counterparts generated very similar results; results reported hereafter and in Table 3 are based on the linear probability models.) In analyses of the entire sample, VBP was associated with increases in the monthly probability of at least 1 follow-up contact, of psychiatric consultation, and of PHQ-9 assessment of 0.05 (95% confidence interval [CI], 0.00-0.10; P <.05), 0.04 (95% CI, 0.00-0.07; P <.05), and 0.07 (95% CI, 0.02-0.11; P <.05), respectively. These increases amount to approximately 9%, 30%, and 15% of the predicted levels of the respective fidelity outcomes in the absence of VBP. Analysis restricted to Disability Lifeline enrollees (85% of the unrestricted sample) generated very similar results, with slightly smaller marginal effects of VBP for follow-up contacts and PHQ assessments but a slightly greater marginal effect for psychiatric consultation (Table 3).
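
Because the linear probability models enter the VBP indicator additively, the marginal effect of VBP is simply the coefficient on that indicator, and the predicted probabilities with and without exposure differ by exactly that amount. A sketch under the same hypothetical setup as above:

import statsmodels.formula.api as smf

# Mixed-effects linear probability model for one fidelity outcome: a monthly
# 0/1 indicator of at least 1 follow-up contact, random intercept per patient.
m = smf.mixedlm(
    "followup_contact ~ vbp + age + female + phq9_baseline + C(clinic)",
    data=df,
    groups=df["patient_id"],
).fit()

me = m.params["vbp"]                 # marginal effect of VBP
low, high = m.conf_int().loc["vbp"]  # 95% CI bounds
print(f"marginal effect of VBP: {me:.2f} (95% CI, {low:.2f}-{high:.2f})")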

Sensitivity analysis of the fidelity outcomes controlling for patient fixed effects produced similar or slightly greater point estimates of the marginal effects of VBP (eAppendix Table [eAppendices available at ajmc.com]). However, the fixed-effects analysis, by construction restricted to patients who received care both before and after VBP, had a much-reduced sample size (359 patients vs 1806 in the unrestricted analysis). In this smaller sample, marginal effects of VBP were not statistically significant for follow-up contacts or psychiatric consultation but remained significant for PHQ assessments (marginal effect of VBP: 0.09; 95% CI, 0.02-0.16).
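
A patient fixed-effects model can be sketched as a within-patient ("demeaning") transformation, keeping only patients observed both before and after VBP (names hypothetical, df as above):

import statsmodels.api as sm

# Keep patients whose VBP indicator takes both values 0 and 1 over follow-up.
both = df.groupby("patient_id")["vbp"].transform("nunique") == 2
d = df[both].copy()
# Demean the outcome and the exposure within patient.
for col in ["followup_contact", "vbp"]:
    d[col + "_dm"] = d[col] - d.groupby("patient_id")[col].transform("mean")
fe = sm.OLS(d["followup_contact_dm"], sm.add_constant(d["vbp_dm"])).fit(
    cov_type="cluster", cov_kwds={"groups": d["patient_id"]}
)
print(fe.params["vbp_dm"])  # marginal effect of VBP with patient fixed effects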

Our analysis indicated that both the size of the MHIP caseload at a clinic and the clinic’s level of fidelity prior to VBP modified the effect of VBP. As shown in eAppendix Figure A, for follow-up contacts and PHQ assessments, the marginal effect of VBP increased with the number of patients treated at the clinic prior to VBP. For follow-up contacts, the marginal effect of VBP did not achieve statistical significance until the number of patients treated at the clinic in 2008 reached 100 (the top 25% of clinics); for PHQ assessments, not until it reached 140 (the top 10% of clinics). Caseload did not appear to modify the VBP effect for psychiatric consultation. Conversely, for each fidelity measure, the marginal effect of VBP decreased with the clinic’s level of fidelity prior to the start of VBP (eAppendix Figure B). For example, the effect of VBP on follow-up contacts was significantly greater than 0 only among clinics whose first-month follow-up contact rate in 2008 averaged below 0.8 (75% of all clinics). Sensitivity analysis defining the modifiers at the implementation site level produced largely consistent findings. One exception was that, for psychiatric consultation, the VBP effect appeared to decrease with the size of the MHIP caseload at the site level, whereas there was no clear modifying effect of caseload defined at the clinic level.
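
Under the quadratic interaction specification sketched earlier, the marginal effect of VBP at a given modifier value m (eg, a clinic's 2008 caseload) is the difference in predicted fidelity between vbp = 1 and vbp = 0 at that value of m. A sketch with hypothetical coefficient names and toy values:

def vbp_marginal_effect(b_vbp, b_int, b_int2, m):
    # E[fidelity | vbp=1, m] - E[fidelity | vbp=0, m]
    #   = b_vbp + b_int * m + b_int2 * m**2
    return b_vbp + b_int * m + b_int2 * m ** 2

# Evaluated, for example, at the caseload thresholds discussed in the text
# (coefficients below are toy values, not the fitted estimates):
for m in (50, 100, 140):
    print(m, vbp_marginal_effect(0.02, 0.0006, -0.000001, m))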

Consistent with results for the fidelity outcomes, exposure to VBP was associated with an adjusted hazard ratio (HR) of 1.45 (95% CI, 1.04-2.03) for achieving clinically significant improvement in depression, indicating that exposure to VBP was associated with a shorter time to improvement. This result held when we restricted the sample to Disability Lifeline patients (adjusted HR, 1.47; 95% CI, 1.03-2.12).

DISCUSSION

With data from a statewide implementation of the CCM in community health clinics, we found that a VBP program embedded in community-based implementation improved fidelity to several key process-of-care elements of the evidence-based model, including one not explicitly incentivized by the VBP. Consistent with our hypotheses, we also found stronger responses to VBP among provider organizations that cared for larger numbers of patients and among organizations with lower initial fidelity. Finally, we found that VBP led to better patient outcomes, as indicated by a shorter time to clinically significant improvement in depressive symptoms.

Our findings of VBP effects contrast with the limited evidence supporting the effectiveness of existing VBP programs.15,25,35-37 Several reasons may underlie the difference. First, MHIP paired financial incentives with chronic care quality improvement and capacity-building efforts. An expert team at the University of Washington provided training to care managers from all participating community health clinics and made archived training materials available online, and consulting psychiatrists were contracted to work with all participating clinics. Implementation of a clinical tracking system, a crucial tool for population health management and case tracking, was a precondition for receiving MHIP funding and was achieved at all clinics. In contrast, existing VBP contracts typically provide no support system for quality improvement. Second, the MHIP VBP targeted several key elements of a single evidence-based care model, focusing improvement efforts and sending strong signals and clear directions to provider organizations about what to improve. Existing VBP programs typically contain a large number of quality targets that may not be clinically meaningful, diluting incentives and failing to engage clinicians.38

Monthly PHQ assessments, the measure not explicitly incentivized under the MHIP VBP, improved by 15% in response to VBP. Because PHQ assessments were conducted at follow-up contacts with the care manager, incentivizing systematic follow-up may have had the “spillover effect” of incentivizing these assessments. A direct implication is that, for fidelity/process-of-care measures that complement one another, designers of VBP programs may consider recognizing some, but not all, of them as VBP targets. Keeping the target set parsimonious (and thus not diluting incentives) need not come at the price of forsaking important quality goals.

Clinics with a larger patient caseload responded more to VBP on 2 of the 3 fidelity measures considered, suggesting that smaller clinics may perceive insufficient incentives because of the limited scale of their VBP payment25 and/or may lack the resources to make systematic changes to care in response to VBP. To ensure that provider organizations of all sizes (and their patients) benefit from VBP, implementation initiatives may consider pooling resources, for example, by establishing learning collaboratives and providing coaching and consultation to organizations in need. Consistent with our hypothesis and with findings of previous studies,25,28-30 lower baseline fidelity was associated with greater improvement in fidelity in response to VBP, thus reducing the variation in fidelity/quality among provider organizations implementing the evidence-based model. Although this is a desirable outcome, it also reveals that, with a single performance target for all providers, high performers may not be adequately motivated to improve further even when there is still room for improvement. One option would be to apply 2 sets of thresholds to provider organizations with different initial levels of performance.39 This option, however, adds to the complexity of the VBP and may be perceived as unfair or unacceptable, especially by high-performing provider organizations.

Limitations

We assessed the effects of VBP in a natural experiment, not a randomized trial. Although we controlled for important differences in patient baseline characteristics, for time-invariant differences among clinics (with clinic fixed effects), and for proxies of provider learning over time, these controls were not perfect. However, a series of sensitivity analyses demonstrated the robustness of our findings. The fidelity and patient outcomes we examined were subject to data availability and usability; we were not able to examine antidepressant management, an important fidelity outcome, but intend to do so when data become available. We used data from the implementation of a specific care management model in a single state, which potentially limits the generalizability of our findings to other evidence-based care approaches or other geographic areas. This limitation, however, is mitigated by the fact that the CCM is highly consistent with the Chronic Care Model40,41 and that MHIP involved a large and diverse set of provider organizations.

CONCLUSIONS

Our study provides strong evidence that a VBP component that adopts best practices of VBP design and is embedded in an implementation initiative can improve fidelity to key elements of an evidence-based care model, including elements not directly incentivized by the VBP, and, in turn, improve patient outcomes.

Author Affiliations: Department of Healthcare Policy and Research (YB, AAE), and Department of Psychiatry (YB), Weill Cornell Medical College, New York, NY; Department of Health Care Policy, Harvard Medical School (TGM), Boston, MA; Department of Psychiatry and Behavioral Sciences, School of Medicine, University of Washington (YFC, JU), Seattle, WA; Department of Health Management and Policy, School of Public Health, University of Michigan (AMR), Ann Arbor, MI; Department of Psychiatry and The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth (MLB), Lebanon, NH; Department of Psychiatry, Columbia University Medical Center (HAP), New York, NY; New York-Presbyterian Hospital (HAP), New York, NY; Community Health Plan of Washington (EH), Seattle, WA.
 
Source of Funding: This study is supported by the National Institute of Mental Health (1R01MH104200; YB, TGM, YFC, AAE, AMR, MLB, HAP, JU). 
 
Author Disclosures: Drs Chan and Unützer received salary support from Community Health Plan of Washington for training, clinical consultation, and quality improvement efforts related to the Mental Health Integration Program. Ms Hafer is employed by Community Health Plan of Washington, the sponsor of the care management program studied in the paper. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article. 
 
Authorship Information: Concept and design (YB, TGM, YFC, AMR, MLB, EH, JU); acquisition of data (YB, YFC, AAE, EH, JU); analysis and interpretation of data (YB, TGM, YFC, AMR, MLB, HAP, EH, JU); drafting of the manuscript (YB, TGM, AMR, MLB, EH, JU); critical revision of the manuscript for important intellectual content (YB, TGM, YFC, MLB, HAP, EH, JU); statistical analysis (YB, TGM); provision of patients or study materials (EH, JU); obtaining funding (YB, JU); administrative, technical, or logistic support (AAE); and supervision (YB, JU).
 
Address Correspondence to: Yuhua Bao, PhD, Weill Cornell Medical College, 402 E 67th St, New York, NY 10065. E-mail: yub2003@med.cornell.edu.
REFERENCES

1. Blumenthal D, McGinnis JM. Measuring Vital Signs: an IOM report on core metrics for health and health care progress. JAMA. 2015;313(19):1901-1902. doi: 10.1001/jama.2015.4862.
 
2. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18(2):237-256.
 
3. Chin MH, Cook S, Drum ML, et al; Midwest Cluster Health Disparities Collaborative. Improving diabetes care in midwest community health centers with the health disparities collaborative. Diabetes Care. 2004;27(1):2-8.
 
4. Landon BE, Hicks LS, O’Malley AJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med. 2007;356(9):921-934.
 
5. Rubenstein LV, Jackson-Triche M, Unützer J, et al. Evidence-based care for depression in managed primary care practices. Health Aff (Millwood). 1999;18(5):89-105.
 
6. Lichtman JH, Roumanis SA, Radford MJ, Riedinger MS, Weingarten S, Krumholz HM. Can practice guidelines be transported effectively to different settings? results from a multicenter interventional study. Jt Comm J Qual Improv. 2001;27(1):42-53.
 
7. Pearson M, Wu S, Schaefer J, et al. Assessing the implementation of the chronic care model in quality improvement collaboratives. Health Serv Res. 2005;40(4):978-996.
 
8. Chin MH, Auerbach SB, Cook S, et al. Quality of diabetes care in community health centers. Am J Public Health. 2000;90(3):431-434.
 
9. Leatherman S, Berwick D, Iles D, et al. The business case for quality: case studies and an analysis. Health Aff (Millwood). 2003;22(2):17-30.
 
10. Hospital value-based purchasing. CMS website. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/hospital-value-based-purchasing/index.html?redirect=/Hospital-Value-Based-Purchasing. Published October 30, 2015. Accessed January 28, 2016.
 
11. Better, smarter, healthier: in historic announcement, HHS sets clear goals and timeline for shifting Medicare reimbursements from volume to value [press release]. Washington, DC: HHS; January 26, 2015. http://www.hhs.gov/about/news/2015/01/26/better-smarter-healthier-in-historic-announcement-hhs-sets-clear-goals-and-timeline-for-shifting-medicare-reimbursements-from-volume-to-value.html. Accessed December 2016.
 
12. An LC, Bluhm JH, Foldes SS, et al. A randomized trial of a pay-for-performance program targeting clinician referral to a state tobacco quitline. Arch Intern Med. 2008;168(18):1993-1999. doi: 10.1001/archinte.168.18.1993.
 
13. Beaulieu ND, Horrigan DR. Putting smart money to work for quality improvement. Health Serv Res. 2005;40(5, pt 1):1318-1334.
 
14. Roski J, Turbyville S, Dunn D, Krushat M, Scholle SH. Resource use and associated care effectiveness results for people with diabetes in managed care organizations. Am J Med Qual. 2008;23(5):365-374. doi: 10.1177/1062860608316180.
 
15. Scott A, Sivey P, Ait Ouakrim D, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011;(9):CD008451. doi: 10.1002/14651858.CD008451.pub2.
 
16. Archer J, Bower P, Gilbody S, et al. Collaborative care for depression and anxiety problems. Cochrane Database Syst Rev. 2012;(10):CD006525. doi: 10.1002/14651858.CD006525.pub2.
 
17. Gilbody S, Bower P, Fletcher J, Richards D, Sutton AJ. Collaborative care for depression: a cumulative meta-analysis and review of longer-term outcomes. Arch Intern Med. 2006;166(21):2314-2321.
 
18. Williams LS, Kroenke K, Bakas T, et al. Care management of poststroke depression: a randomized, controlled trial. Stroke. 2007;38(3):998-1003.
 
19. Trivedi MH. Tools and strategies for ongoing assessment of depression: a measurement-based approach to remission. J Clin Psychiatry. 2009;70(suppl 6):26-31. doi: 10.4088/JCP.8133su1c.04.
 
20. Von Korff M, Tiemens B. Individualized stepped care of chronic illness. West J Med. 2000;172(2):133-137.
 
21. Katon W, Unutzer J, Wells K, Jones L. Collaborative depression care: history, evolution and ways to enhance dissemination and sustainability. Gen Hosp Psychiatry. 2010;32(5):456-464. doi: 10.1016/j.genhosppsych.2010.04.001.
 
22. CMS; HHS. Medicare program; revisions to payment policies under the physician fee schedule and other revisions to part B for CY 2016. Fed Regist. 2015;80:41685-41966.
 
23. What is MHIP? Integrated Care NW, Washington State Mental Health Integration Program website. http://integratedcare-nw.org/index.html. Accessed January 28, 2016.
 
24. Christianson JB, Leatherman S, Sutherland K. Lessons from evaluations of purchaser pay-for-performance programs: a review of the evidence. Med Care Res Rev. 2008;65(suppl 6):S5-S35. doi: 10.1177/1077558708324236.
 
25. Van Herck P, De Smedt D, Annemans L, Remmen R, Rosenthal MB, Sermeus W. Systematic review: effects, design choices, and context of pay-for-performance in health care. BMC Health Serv Res. 2010;10:247. doi: 10.1186/1472-6963-10-247.
 
26. de Bruin SR, Baan CA, Struijs JN. Pay-for-performance in disease management: a systematic review of the literature. BMC Health Serv Res. 2011;11:272. doi: 10.1186/1472-6963-11-272.
 
27. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581-629.
 
28. Greene J, Hibbard JH, Overton V. Large performance incentives had the greatest impact on providers whose quality metrics were lowest at baseline. Health Aff (Millwood). 2015;34(4):673-680.
 
29. Li J, Hurley J, DeCicca P, Buckley G. Physician response to pay-for-performance: evidence from a natural experiment. Health Econ. 2014;23(8):962-978. doi: 10.1002/hec.2971.
 
30. Rosenthal MB, Frank RG, Li Z, Epstein AM. Early experience with pay-for-performance: from concept to practice. JAMA. 2005;294(14):1788-1793.
 
31. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613.
 
32. Unutzer J, Choi Y, Cook IA, Oishi S. A web-based data management system to improve care for depression in a multicenter clinical trial. Psychiatr Serv. 2002;53(6):671-673,678.
 
33. Bao Y, Casalino LP, Ettner SL, Bruce ML, Solberg LI, Unutzer J. Designing payment for Collaborative Care for Depression in primary care. Health Serv Res. 2011;46(5):1436-1451. doi: 10.1111/j.1475-6773.2011.01272.x.
 
34. Grambsch PM, Therneau TM. Proportional hazards tests and diagnostics based on weighted residuals. Biometrika. 1994;81(3):515-526.
 
35. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606-1615. doi: 10.1056/NEJMsa1112351.
 
36. Flodgren G, Eccles MP, Shepperd S, Scott A, Parmelli E, Beyer FR. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev. 2011;(7):CD009255. doi: 10.1002/14651858.CD009255.
 
37. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care? a systematic review. Ann Intern Med. 2012;157(12):889-899. doi: 10.7326/0003-4819-157-12-201212180-00009.
 
38. Jha AK. Time to get serious about pay for performance. JAMA. 2013;309(4):347-348. doi: 10.1001/jama.2012.196646.
 
39. Ryan AM. Will value-based purchasing increase disparities in care? N Engl J Med. 2013;369(26):2472-2474. doi: 10.1056/NEJMp1312654.
 
40. Von Korff M, Gruman J, Schaefer J, Curry SJ, Wagner EH. Collaborative management of chronic illness. Ann Intern Med. 1997;127(12):1097-1102.
 
41. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74(4):511-544.