Value-Based Payment in Implementing Evidence-Based Care: The Mental Health Integration Program in Washington State

Yuhua Bao, PhD; Thomas G. McGuire, PhD; Ya-Fen Chan, PhD; Ashley A. Eggman, MS; Andrew M. Ryan, PhD; Martha L. Bruce, PhD, MPH; Harold Alan Pincus, MD; Erin Hafer, MPH; and Jürgen Unützer, MD, MPH
Value-based payment improved fidelity to key elements of the Collaborative Care Model, an evidence-based mental health intervention, and improved patient depression outcomes in Washington state.
ABSTRACT

Objectives: To assess the role of value-based payment (VBP) in improving fidelity and patient outcomes in community implementation of an evidence-based mental health intervention, the Collaborative Care Model (CCM). 
 
Study Design: Retrospective study based on a natural experiment. 
 
Methods: We used the clinical tracking data of 1806 adult patients enrolled in a large implementation of the CCM in community health clinics in Washington state. VBP was initiated in year 2 of the program, creating a natural experiment. We compared implementation fidelity (measured by 3 process-of-care elements of the CCM) between patient-months exposed to VBP and patient-months not exposed to VBP. A series of regressions was estimated to check the robustness of findings. We estimated a Cox proportional hazard model to assess the effect of VBP on time to achieving clinically significant improvement in depression (based on changes in depression symptom scores over time).
 
Results: Estimated marginal effects of VBP on fidelity ranged from 9% to 30% of the level of fidelity had there been no exposure to VBP (P <.05 for every fidelity measure). Improvement in fidelity in response to VBP was greater among providers with a larger patient panel and among providers with a lower level of fidelity at baseline. Exposure to VBP was associated with an adjusted hazard ratio of 1.45 (95% confidence interval, 1.04-2.03) for achieving clinically significant improvement in depression.
 
Conclusions: VBP improved fidelity to key elements of the CCM, both those directly incentivized and those not explicitly incentivized by the VBP, and improved patient depression outcomes.

Am J Manag Care. 2017;23(1):48-53
Take-Away Points

Value-based payment (VBP), when embedded in implementation initiatives, has the potential to improve fidelity to evidence-based care and patient outcomes.
  • VBP may improve fidelity to evidence-based care when quality targets map closely to key elements of the evidence-based model. 
  • Managed care organizations may consider using VBP as a strategy to enhance the effectiveness of implementing evidence-based care among provider networks.
  • Additional support may be needed to help providers with a smaller patient panel respond to and benefit from VBP; innovations in VBP design, such as a multi-tiered system, may be needed to provide sufficient incentives to providers with higher performance at baseline.
Despite extraordinary increases in medical knowledge, healthcare in the United States frequently falls short of evidence-based standards.1 Implementation fidelity is the degree to which interventions or programs are implemented as intended by the program developers,2 and poor implementation fidelity is one explanation of why the promise of evidence-based medicine remains unfulfilled.3,4 Substantial variation exists among providers in the intensity of implementation and the degree of fidelity to the evidence base,5-8 and current financial incentives in the US healthcare system contribute to a poor “business case” for adopting evidence-based practices in an effective manner.9 Targeted financial incentives have the potential to improve fidelity and, in turn, implementation effectiveness.

Value-based payment (VBP), a form of pay for performance, incentivizes quality and outcomes of care by tying payments to providers to their performance on predefined quality or efficiency targets. VBP has been widely adopted by private and public payers, with recent examples including Medicare’s launch of the hospital VBP program in 201310 and the HHS secretary's announcement of measurable goals and a timeline for moving the healthcare system at large toward quality-based payment.11 Most existing VBP programs provide standalone financial incentives without a support system to help providers redesign care processes. VBP programs that are embedded in implementations of evidence-based care and designed to improve implementation effectiveness are rare12-15 and have not been extensively studied.

In this study, we assessed whether a VBP component could improve the effectiveness of Collaborative Care Model (CCM) implementation. The CCM is a team-based approach to treating depression and other common behavioral health conditions in primary care,16-18 with the team comprising a primary care physician, a care manager, and a consulting psychiatrist. The key principles of the model include systematic follow-up of patients by the care manager; measurement-based care, which uses symptom-rating scales to track clinical improvement or identify patients who are not improving;19 and “stepped care,”20 in which treatment is systematically adjusted or intensified by the primary care team (with input from the consulting psychiatrist) for patients not improving.21 CCM implementation has gained momentum: in its proposed rules for the 2016 Physician Fee Schedule, CMS stated an intention to modify current payment to cover the CCM.22

We conducted our assessment in the context of the Mental Health Integration Program (MHIP) of Washington state.23 The MHIP is an ongoing, publicly funded implementation of the CCM in a diverse network of community health clinics across Washington. Started in 2008 in the 2 most urban counties in the state, it is now statewide and has served over 35,000 individuals. The VBP component of MHIP payment started in 2009 in response to substantial variation in quality of care and patient outcomes observed in the first year. It adopted several best-practice design features of VBP24-26: close mapping of quality measures to key elements of the evidence-based model, a substantial incentive payment (25% of total payments to providers), and a dynamic set of quality measures and targets that are adjusted over time. The MHIP thus offers a unique opportunity to assess the role of VBP in improving the implementation effectiveness of evidence-based care in community settings.

We hypothesized that the MHIP VBP improved fidelity to the CCM, as measured by key process-of-care elements of the model, including elements both directly incentivized and not explicitly incentivized by the VBP. We further hypothesized that the MHIP VBP improved patient depression outcomes. Provider organizations with a larger patient panel have more at stake under a VBP scheme and greater resources to invest in quality improvement in response to VBP,27 and providers with a lower level of performance at baseline have more room and motivation to improve.28-30 We therefore hypothesized that the effect of VBP on implementation fidelity was greater among clinics with a larger MHIP patient caseload and among clinics with a lower level of fidelity prior to VBP.

METHODS

Study Period, Population, and Data

This study focused on phase 1 of the MHIP VBP. Phase 1 (covering year 2009) used 4 process-of-care targets that mapped closely to the principles of the CCM (Table 1). Provider organizations would initially receive 75% of their total payment for the CCM (ie, with a 25% holdback), receive 5% for achieving each of the 4 targets in a calendar quarter, and receive an additional 5% for participation. Adjustments to the VBP scheme were made in subsequent phases by raising benchmarks for existing targets, eliminating targets that had been achieved by most provider organizations, and/or adding new targets to address emerging gaps in quality, thus providing incentives for continuous improvement.
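To make the payment arithmetic concrete, the sketch below computes the share of total CCM payment a provider organization would receive for a quarter under phase 1. The function name and inputs are illustrative only and are not taken from MHIP documentation.

```python
def phase1_payment_share(targets_met: int, participated: bool = True) -> float:
    """Illustrative phase 1 VBP arithmetic: 75% base payment (25% holdback),
    plus 5% for each of the 4 process-of-care targets achieved in the quarter,
    plus 5% for participation."""
    if not 0 <= targets_met <= 4:
        raise ValueError("targets_met must be between 0 and 4")
    return round(0.75 + 0.05 * targets_met + (0.05 if participated else 0.0), 2)

# A participating organization achieving all 4 targets recovers the full payment:
# 0.75 + 4 * 0.05 + 0.05 = 1.00
print(phase1_payment_share(4))  # 1.0
print(phase1_payment_share(2))  # 0.9 (two targets missed)
```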

Our study population comprised patients 18 years or older who initiated care in the MHIP between January 1, 2008, and June 30, 2009, in 1 of the 35 community health clinics that started MHIP implementation in 2008; these patients therefore had experience with the MHIP both before and after the launch of phase 1 VBP. We restricted patient enrollment in the MHIP to June 30, 2009, to ensure that the first 6 months of care for every patient enrolled in 2009 were under the influence of phase 1 (not phase 2) of the VBP. All 35 clinics were located in King and Pierce counties, the 2 most populous counties in Washington state, and were affiliated with 7 community health centers. These community health centers were the parent organizations of the clinics and were Federally Qualified Health Centers. The populations they served were primarily patients covered by Medicaid or other state-funded programs and patients who were uninsured. Clinical social workers, psychologists, licensed mental health counselors, and other clinicians on staff at the clinics served as the CCM care managers in the MHIP.

Patient inclusion criteria included a baseline Patient Health Questionnaire (PHQ-9; possible range, 0-27)31 score of 10 or greater, indicating clinically significant depression, and at least 1 follow-up contact with the MHIP care manager within 24 weeks of the initial contact, to allow at least 1 opportunity to assess depression outcomes. Patients whose last contact with the MHIP occurred within 1 week of the first contact were excluded because they were likely determined ineligible for the MHIP. The vast majority of MHIP patients during our study period were enrollees in Washington state’s Disability Lifeline Program; these patients were temporarily disabled because of a physical or mental health condition and expected to be unemployed for 90 days or more. King County extended eligibility to additional patient populations, including low-income mothers and their children, low-income older adults, uninsured individuals, veterans, and veterans’ family members.

Our data were from the Web-based registry32 used by all MHIP participating clinics to systematically document care management activities and clinical outcomes and to assist with population management.

Measures

Three dichotomous measures at the patient-month level captured fidelity to major domains of the CCM (Table 1). “At least 1 follow-up contact with the care manager” reflects the principle of systematic follow-up; “at least 1 psychiatric consultation” reflects the principle of stepped care, the idea that treatment should be systematically changed or intensified for patients not responding to initial treatment. An important mechanism by which stepped care is operationalized in the CCM is consultation with a mental health specialist (usually a psychiatrist) about potential treatment changes. These 2 measures were closely related to 2 of the quality targets in phase 1 of the MHIP VBP (Table 1). “At least 1 PHQ-9 assessment” reflects the principle of measurement-based care, whereby treatment teams use symptom rating scales to systematically track clinical improvement or the lack thereof; this measure was not explicitly incentivized in the MHIP VBP. Data on current medications (also documented in the MHIP registry) are not available for research at this point, so we had no fidelity measure mapping to the fourth VBP target, documentation of current psychiatric medication in the registry for 75% of cases (Table 1). Each fidelity measure was assessed for each 4-week interval starting from the patient’s initial contact with the MHIP care manager, up to 24 weeks or until the patient’s last contact with the care manager, whichever occurred first.
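As an illustration of how such patient-month indicators could be derived from registry contact records, a minimal sketch follows. The data frame columns (patient_id, contact_date, contact_type, phq9_score) and the encoding of contact types are assumptions for the example and do not reflect the actual MHIP registry schema.

```python
import numpy as np
import pandas as pd

def monthly_fidelity(contacts: pd.DataFrame) -> pd.DataFrame:
    """Flag, for each patient and each 4-week interval after the initial care
    manager contact (up to 24 weeks or the last contact, whichever comes first),
    whether the interval contained a follow-up contact, a psychiatric
    consultation, and a PHQ-9 assessment. Assumed columns: patient_id,
    contact_date, contact_type in {"initial", "follow_up", "psych_consult"},
    and phq9_score (NaN when no assessment was done at that contact)."""
    df = contacts.copy()
    df["contact_date"] = pd.to_datetime(df["contact_date"])
    first = df.groupby("patient_id")["contact_date"].transform("min")
    last = df.groupby("patient_id")["contact_date"].transform("max")

    df["interval"] = (df["contact_date"] - first).dt.days // 28 + 1  # 1..6
    max_interval = np.minimum(6, (last - first).dt.days // 28 + 1)   # censoring
    df = df[df["interval"] <= max_interval]

    return (
        df.groupby(["patient_id", "interval"])
        .agg(
            followup_contact=("contact_type", lambda s: int((s == "follow_up").any())),
            psych_consult=("contact_type", lambda s: int((s == "psych_consult").any())),
            phq9_assessed=("phq9_score", lambda s: int(s.notna().any())),
        )
        .reset_index()
    )
```

In a complete patient-month panel, intervals containing no contacts at all would be added back as rows of zeros.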

We tested hypotheses about 2 potential modifiers of the effect of VBP on fidelity: the size of the MHIP patient panel, measured by the cumulative number of patients treated at the clinic prior to phase 1 VBP (ie, in 2008), and clinic-level fidelity at baseline, measured by the average count of follow-up contacts, psychiatric consultations, or PHQ-9 assessments over the initial months of care of all patients treated at the clinic in 2008.
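Continuing the illustrative schema above (with an added clinic_id column), the two clinic-level moderators might be computed roughly as follows; counting over all of a patient's 2008 contacts is a simplification of the "initial months of care" described above.

```python
import pandas as pd

def clinic_moderators(contacts: pd.DataFrame) -> pd.DataFrame:
    """Compute clinic-level panel size and baseline fidelity from 2008
    (pre-VBP) registry records. Assumed columns: clinic_id, patient_id,
    contact_date, contact_type, phq9_score."""
    baseline = contacts[pd.to_datetime(contacts["contact_date"]).dt.year == 2008]

    # Panel size: number of distinct MHIP patients treated at the clinic in 2008.
    panel_size = (
        baseline.groupby("clinic_id")["patient_id"].nunique().rename("panel_size")
    )

    # Baseline fidelity: average per-patient counts of follow-up contacts,
    # psychiatric consultations, and PHQ-9 assessments at each clinic.
    per_patient = baseline.groupby(["clinic_id", "patient_id"]).agg(
        followups=("contact_type", lambda s: int((s == "follow_up").sum())),
        consults=("contact_type", lambda s: int((s == "psych_consult").sum())),
        phq9s=("phq9_score", lambda s: int(s.notna().sum())),
    )
    baseline_fidelity = per_patient.groupby("clinic_id").mean()

    return baseline_fidelity.join(panel_size)
```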

A clinically significant improvement in depression was defined as achieving a follow-up PHQ-9 score under 10 or a reduction of at least 50% in the PHQ-9 score31 within 24 weeks of the initial care manager contact.
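A sketch of how this outcome, and the time to reaching it, might be operationalized from follow-up PHQ-9 assessments appears below; the column names and weekly granularity are illustrative assumptions, and the resulting event and duration variables are the kind of inputs a Cox proportional hazard model (as described in the abstract) would take.

```python
import pandas as pd

def depression_improvement(phq: pd.DataFrame) -> pd.DataFrame:
    """For each patient, determine whether and when clinically significant
    improvement occurred within 24 weeks of the initial contact: a follow-up
    PHQ-9 below 10 or a reduction of at least 50% from baseline. Assumed
    columns: patient_id, week (weeks since initial contact, 0 = baseline),
    phq9_score."""
    phq = phq.sort_values(["patient_id", "week"])
    baseline = phq[phq["week"] == 0].set_index("patient_id")["phq9_score"]

    fu = phq[(phq["week"] > 0) & (phq["week"] <= 24)].copy()
    fu["improved"] = (fu["phq9_score"] < 10) | (
        fu["phq9_score"] <= 0.5 * fu["patient_id"].map(baseline)
    )

    def summarize(g: pd.DataFrame) -> pd.Series:
        hit = g[g["improved"]]
        if len(hit):  # event: first qualifying follow-up assessment
            return pd.Series({"event": 1, "weeks": hit["week"].iloc[0]})
        # censored at the last follow-up assessment within 24 weeks
        return pd.Series({"event": 0, "weeks": g["week"].iloc[-1]})

    return fu.groupby("patient_id").apply(summarize)
```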

Statistical Analysis

The rolling enrollment of patients in the MHIP throughout our study period created a natural experiment in the sense that exposure of patient episodes of care to VBP resembled random assignment. We compared fidelity outcomes for patient-months exposed to VBP with those for patient-months not exposed to VBP. We estimated a multi-level linear probability model for each fidelity outcome with a random intercept at the patient level to account for clustering of months within the same patient. The key independent variable was a dichotomous indicator of VBP exposure, defined as 1 if the index patient-month started after January 1, 2009, when phase 1 VBP took effect, and 0 otherwise. Because care management activities were more intensive in the early months of a patient’s treatment episode,33 our adjusted analysis controlled for a set of dummy variables indicating whether the index month was the patient’s first through sixth month in the MHIP.

We conducted several sensitivity analyses. First, we restricted the sample to Disability Lifeline Program enrollees, who accounted for 85% of the entire sample. Second, we estimated the linear probability model with a fixed effect for each patient, first with the entire sample (ie, regardless of whether a patient had any exposure to VBP) and then restricting to patients who contributed months both pre- and post-VBP during the first 24 weeks of care. The latter approach allowed us to examine the effect of VBP on treatment fidelity within the same patient, but it was conducted with a much smaller sample (about one-third of the original) and was thus subject to lower precision. Finally, we estimated the logistic version of each model and compared the implications (eg, the marginal effect of VBP) of the two sets of analyses.
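For concreteness, a minimal sketch of the primary fidelity model, a patient-level random-intercept linear probability model with a VBP-exposure indicator and month-in-program dummies, is given below using statsmodels. Variable and file names are assumptions; the actual analysis may have used different software and additional covariates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-month panel with columns: patient_id,
# month_in_program (1-6), vbp_exposed (0/1), and the three 0/1 fidelity
# outcomes (followup_contact, psych_consult, phq9_assessed).
panel = pd.read_csv("mhip_patient_months.csv")  # illustrative file name

# Multi-level linear probability model: random intercept per patient,
# VBP exposure as the key regressor, dummies for month in program.
lpm = smf.mixedlm(
    "followup_contact ~ vbp_exposed + C(month_in_program)",
    data=panel,
    groups=panel["patient_id"],
).fit()
print(lpm.summary())

# Logistic sensitivity analysis (shown here without the random intercept,
# for brevity); the marginal effect of VBP exposure can then be compared
# with the linear-probability estimate.
logit = smf.logit(
    "followup_contact ~ vbp_exposed + C(month_in_program)", data=panel
).fit()
print(logit.get_margeff(at="overall").summary())
```

A patient fixed-effects variant, as in the within-patient sensitivity analysis, could be approximated by replacing the random intercept with patient dummies in an ordinary least squares fit.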

 