Multi-Payer Advanced Primary Care Practice Demonstration on Quality of Care

An evaluation of the Multi-Payer Advanced Primary Care Practice Demonstration found mixed results in terms of quality of care provided to Medicare and Medicaid beneficiaries.

ABSTRACT

Objectives: We evaluated whether primary care practices in the Medicare Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration improved the quality of care and patient outcomes for beneficiaries.

Study Design: For our quantitative analyses, we employed a pre-post study design with a comparison group using enrollment data, Medicare fee-for-service claims data, and Medicaid managed care and fee-for-service claims data, covering the period 2 to 4 years before Medicare joined the state patient-centered medical home initiatives through December 2014. We used difference-in-differences (DID) regression analysis to compare quality and outcomes in the periods before and after the demonstration began.

Methods: We examined the extent to which MAPCP and comparison group beneficiaries received recommended care as captured by up to 11 process and preventive care measures; we also used 4 measures of potentially avoidable hospitalizations to assess patient outcomes.

Results: Analyses of Medicare and Medicaid data did not consistently reflect the positive impacts intended by the demonstration. Our descriptive and DID analysis found an inconsistent pattern among the process-of-care results, and there were some significant unfavorable associations between participation in MAPCP and avoidable hospitalizations.

Conclusions: Our analyses showed few statistically significant, favorable impacts on quality metrics among Medicare or Medicaid beneficiaries receiving care from MAPCP practices.

Am J Manag Care. 2019;25(9):444-449

Takeaway Points

  • Impacts of the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration on quality of care were mixed across 8 participant states.
  • Our analyses showed few statistically significant, favorable impacts of the demonstration on the quality of care received by Medicare or Medicaid beneficiaries.
  • Our null or sometimes unfavorable quality-of-care findings may mean that our comparison group practices were also improving their care processes and quality of care in general.
  • Lessons and limitations learned from the MAPCP Demonstration can help set expectations for policy makers and payers around which quality measures may be most actionable for providers and most relevant for payers in monitoring provider progress in a new demonstration.

The patient-centered medical home (PCMH) is an approach to delivering patient-centered, community-based primary care.1 The PCMH model engages all elements of healthcare (the community, health system, self-management support, delivery system design, decision support, and clinical information systems) to facilitate greater patient involvement in healthcare decisions and to deliver better-coordinated, timely, and effective care.2-5 This primary care delivery model aims to reduce unnecessary utilization and expenditures while ensuring better access and improved quality of care, with some evidence of success.6,7

Many states and payers endorse the PCMH model, including CMS through its sponsorship of the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration. Under the MAPCP Demonstration, CMS joined 8 state-led, multipayer initiatives in Maine, Michigan, Minnesota, New York, North Carolina, Pennsylvania, Rhode Island, and Vermont in late 2011 and early 2012 to support primary care practices in their transformation to advanced primary care practices with a PCMH model at their core.8 Participating payers—Medicare, the state Medicaid agency, and commercial payers—offered participating practices a per-member per-month payment to support key transformation activities, including extending office hours, staffing care teams, coordinating care, and enhancing electronic health record capabilities. Medicare's participation not only provided additional payments to each state's participating practices but also gave those practices technical assistance and data reports. Additional details on each state's PCMH initiative are included in Table 1.

Because the demonstration sought to improve the quality and coordination of healthcare services among participants,9 we evaluated whether these primary care practices improved the quality and safety of healthcare.8 Specifically, we assessed whether quality of care and patient outcomes for Medicare and Medicaid beneficiaries changed during the demonstration period relative to those of a comparison group using administrative claims data.

STUDY DATA AND METHODS

Study Design and Data Sources

For our quantitative analyses, we employed a pre-post study design with a comparison group. Files used included Medicare and Medicaid enrollment data, Medicare fee-for-service claims data, and Medicaid managed care and fee-for-service claims data, covering the period 2 to 4 years before Medicare joined the state initiatives through December 2014.

MAPCP and Comparison Group Identification

MAPCP practices were primary care practices selected by the states to participate in the state PCMH initiatives. Comparison group practices were nonparticipating primary care practices that did not have PCMH recognition by the National Committee for Quality Assurance. Medicare or Medicaid beneficiaries were attributed to the MAPCP or comparison practice with which they had the plurality of their primary care visits, with some caveats (eAppendix A [eAppendices available at ajmc.com]). To ensure that the comparison group closely resembled the MAPCP sample on a set of observable characteristics, comparison group data were entropy-balanced weighted based on beneficiary, practice, and geographic characteristics.10 These included sociodemographic factors (eg, age, race, dual Medicare-Medicaid enrollment, risk scores) and other factors (eg, practice type and size, percentage of primary care providers, county-level household income, population density). Reweighting resulted in the intervention and comparison groups looking more similar on all observable characteristics.
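The entropy-balancing step can be illustrated with a minimal sketch of Hainmueller's dual formulation: control-group weights are exponentially tilted so their weighted covariate moments match the intervention group's. This is not the evaluation's actual code; the covariates and samples below are simulated, and only first moments (means) are matched, whereas applied work often matches higher moments as well.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_control, target_moments):
    """Solve the dual problem: weights w_i proportional to exp(X_i . lam)
    whose weighted means of X_control match target_moments."""
    n, k = X_control.shape

    def dual(lam):
        # Convex dual objective; its gradient is (weighted mean - target),
        # so the minimizer enforces the moment constraints exactly.
        return np.log(np.exp(X_control @ lam).sum()) - target_moments @ lam

    res = minimize(dual, np.zeros(k), method="BFGS")
    w = np.exp(X_control @ res.x)
    return w / w.sum()

# Toy example: reweight a "comparison" sample to match "MAPCP" covariate means
rng = np.random.default_rng(0)
X_t = rng.normal(1.0, 1.0, size=(200, 2))   # intervention-group covariates
X_c = rng.normal(0.0, 1.0, size=(500, 2))   # comparison-group covariates
w = entropy_balance(X_c, X_t.mean(axis=0))
# The weighted comparison means (w @ X_c) now approximate X_t.mean(axis=0)
```

The exponential-tilting form guarantees all weights stay positive, which is one reason entropy balancing avoids the extreme or negative weights that can arise with other reweighting schemes.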

Quality-of-Care and Health Outcomes Metrics

We examined the extent to which MAPCP and comparison group beneficiaries received recommended care across up to 11 quality-of-care measures. Nine of the 11 were process-of-care measures as defined using the 2013 Healthcare Effectiveness Data and Information Set Technical Specifications for Physician Measurement.11 The first set of measures examined diabetes-related services (ie, low-density lipoprotein [LDL] cholesterol test, glycated hemoglobin [A1C] test, retinal eye exam, medical attention for nephropathy) for Medicare beneficiaries aged 18 to 75 years and Medicaid beneficiaries aged 18 to 64 years with a claims-based diagnosis of type 1 or type 2 diabetes. We also included 2 all-or-nothing diabetes composite measures: one focused on comprehensive care, as defined by receiving all 4 of these diabetes-related tests, and the other focused on poor care, as defined by receiving none of these 4 diabetes-related tests.
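The two all-or-nothing composites reduce to simple indicator logic at the beneficiary level. A minimal sketch, assuming hypothetical per-beneficiary 0/1 flags for the 4 services (the variable names are illustrative, not the evaluation's actual data fields):

```python
# Hypothetical indicators for one beneficiary: 1 = service received in the year
tests = {"ldl_test": 1, "a1c_test": 1, "retinal_exam": 0, "nephropathy": 1}

# Comprehensive care: all 4 diabetes-related services received
comprehensive_care = int(all(tests.values()))
# Poor care: none of the 4 services received
poor_care = int(not any(tests.values()))
# This beneficiary (3 of 4 services) counts toward neither composite
```

Because the composites are all-or-nothing, a beneficiary with partial care contributes to neither numerator, which makes the comprehensive-care composite a deliberately strict measure.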

In addition, for Medicare beneficiaries 18 years and older with a claims-based diagnosis of ischemic vascular disease, we assessed receipt of a total lipid panel test. Our third set of measures focused on the younger Medicaid population: We examined receipt of breast cancer screening among women aged 40 to 64 years, receipt of cervical cancer screening among women aged 24 to 64 years, and appropriate use of asthma medications among beneficiaries aged 5 to 64 years with persistent asthma (stratified by children and adults).

We included 4 measures of potentially avoidable hospitalizations to assess patient outcomes. Three of the measures were Agency for Healthcare Research and Quality prevention quality indicators (PQIs) developed to highlight potentially avoidable hospitalizations. We assessed PQIs by acute conditions, chronic conditions, and combined. We also calculated a measure of avoidable catastrophic events, defined as hospitalizations with the following primary diagnoses: hip fracture, acute myocardial infarction, acute cerebrovascular accident (stroke), and sepsis. We limited these measures to Medicare beneficiaries given the low frequency of PQIs or avoidable catastrophic events among our Medicaid sample.

Statistical Analyses

We used difference-in-differences (DID) regression analysis to compare outcomes in the periods before and after Medicare joined each state’s initiatives, among both beneficiaries assigned to MAPCP practices and those assigned to comparison group practices. We used logistic regression to examine the likelihood of receiving a service related to patient care and negative binomial regression to examine rates of avoidable hospitalizations and catastrophic events. In all regressions, we adjusted for patient-, practice-, and area-level characteristics (eAppendix A).
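The DID logic behind these regressions can be shown with a simplified, unadjusted numeric example. The rates below are invented for illustration; the actual analysis was regression-adjusted, using logistic models for binary service receipt and negative binomial models for event rates.

```python
# Hypothetical unadjusted rates of receiving a recommended service
rates = {
    ("mapcp", "pre"): 0.62, ("mapcp", "post"): 0.68,
    ("comparison", "pre"): 0.60, ("comparison", "post"): 0.63,
}

# DID = (MAPCP change over time) - (comparison-group change over time)
mapcp_change = rates[("mapcp", "post")] - rates[("mapcp", "pre")]
comparison_change = rates[("comparison", "post")] - rates[("comparison", "pre")]
did = mapcp_change - comparison_change
# did is about 0.03: the MAPCP group improved 3 percentage points more
# than the comparison group, net of the shared time trend
```

Subtracting the comparison group's change removes trends common to both groups, so the DID estimate isolates the change attributable to the demonstration under the parallel-trends assumption.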

RESULTS

Analyses of Medicare and Medicaid data did not consistently reflect the positive impacts intended by the demonstration. First, in eAppendix B, we provide unadjusted performance rates for the process-of-care measures used in this evaluation. The results, covering the 4 pre- and 3 postdemonstration periods, showed complex and varied patterns depending on the quality measure and the state. In general, we found that the MAPCP practices tended to have higher quality in the predemonstration period, but improvements as well as declines in quality performance occurred for all practices over time.

Second, from the DID analysis, an inconsistent pattern emerged among the process-of-care findings (Table 2 [part A and part B]). For the diabetes quality-of-care measures, only New York's and Minnesota's MAPCP Medicare beneficiaries had an increased likelihood of recommended care compared with the comparison group (DID estimates for retinal eye examinations, 3.36 and 3.40, respectively; both P <.05). On the other hand, MAPCP Medicare beneficiaries in Minnesota and Michigan had lower likelihoods of total lipid panel assessment relative to the comparison group (DID estimates, -2.11 and -2.98, respectively; both P <.05).

Within the Medicaid sample, there were also several findings of increased appropriate care at MAPCP practices (Table 2). In New York and Minnesota, the likelihood of receiving cervical cancer screening and breast cancer screening, respectively, was higher in the postdemonstration period among MAPCP beneficiaries relative to the comparison group (5.02 [P <.05] and 6.14 [P <.001], respectively). Several positive findings emerged in the MAPCP practices' Medicaid population in Minnesota across the measures. Both Minnesota and Maine saw a higher likelihood of appropriate use of asthma medication during the postdemonstration period among MAPCP children with asthma (3.45 and 11.22, respectively; both P <.05) relative to the comparison group, but no evidence was seen of improved use of asthma medication among MAPCP adults with asthma. We also found negative results for LDL cholesterol screening in Vermont and for medical attention for nephropathy in Maine; Medicaid MAPCP beneficiaries were less likely to receive these recommended services over time relative to the comparison group (DID estimates, -7.68 [P <.05] and -8.56 [P <.01], respectively).

As shown in Table 3, there were few significant associations between participation in the MAPCP Demonstration and avoidable hospitalizations. In the states where there were significant findings, results were contrary to expectations. We found a positive association for the overall composite PQI admissions in Vermont (1.55; P <.01) and North Carolina (1.48; P <.05) and for avoidable catastrophic events in Maine (0.83; P <.05), suggesting that the rates of these avoidable events grew faster for the MAPCP group relative to the comparison group.

DISCUSSION

One of the goals of the MAPCP Demonstration was to improve health outcomes for all patients.2,5 Anecdotally, we heard from key stakeholders that practice transformation activities were put in place to improve patient outcomes, including the increased use of health information technology (eg, patient registries), quality measurement, and patient follow-up (particularly after an acute health event). In addition, stakeholders described onboarding care managers or a care team to improve care coordination and provider-patient communication, including outreach after hospital discharges or emergency department visits, teaching patients disease self-management, performing medication reconciliation, connecting patients to community-based services, and developing and implementing individualized care plans.

Although MAPCP Demonstration practices made concerted efforts to improve care coordination for their patients, our analyses showed few statistically significant, favorable impacts of the demonstration on process-of-care metrics among Medicare or Medicaid beneficiaries receiving care from MAPCP practices. Our outcome metrics—measured by preventable hospitalizations—also demonstrated largely nonsignificant or unexpectedly unfavorable results across the states. Thus, our results were less favorable than the impacts of some PCMH initiatives that have been previously reported,6,12-15 but they were in line with the effects documented by another report.16

Our tepid findings for the MAPCP Demonstration may be disappointing for frontline providers who believed that their practice transformation and care coordination efforts made a positive difference in their patients' well-being. Several factors may explain the largely null findings and the seeming disconnect between practices' reported systematic use of quality improvement activities and our quantitative results. First, some practices noted that they could change practice processes (eg, outreach to patients needing preventive care) relatively quickly, but those efforts may not correlate with immediate improvements in population-based quality metrics. Our outcome measures were broad and were not always in alignment with the more narrowly focused process-of-care measures, nor were they focused on individual disease states. For example, we measured diabetes process-of-care measures but assessed preventable admissions for multiple conditions; perhaps limiting the analysis to diabetes-related admissions would have been more appropriate. We also found metrics for which the quality of care declined for both groups (eg, total lipid panel), suggesting measurement fatigue or changes in practice patterns. Hence, patient care may not be affected as much by demonstration participation as by the practice patterns of clinical peers or by evidence-based guidelines.

Our results should not be broadly interpreted as an indication of poor quality among MAPCP practices. Rather, our DID results may suggest exogenous factors affecting quality of care for better or worse across all settings. For example, identification of a true comparison group is particularly challenging given the innumerable concurrent private and public quality improvement, pay-for-performance, and value-based purchasing initiatives in existence during the demonstration period. Our findings may simply mean that our comparison group practices were also improving their care processes and quality of care in general through participation in an uncaptured transformation or improvement initiative. Thus, any steady gains made by the demonstration practices may have been eclipsed by simultaneous improvements in the comparison practices.

Limitations and Strengths

Our analyses were limited to a small set of claims-based quality-of-care measures that can be reliably calculated from Medicare and Medicaid claims. Although we were able to focus on some common chronic conditions and a few screening measures important to MAPCP practices, we were unable to provide a more comprehensive picture of quality of care. Richer measurements of quality care, such as intermediate outcomes (eg, A1C results, optimal cholesterol levels, patient blood pressure management), preventive care actions (eg, tobacco screening), and care coordination actions (eg, medication reconciliation), could not be examined here, as these would have required resource- and cost-intensive medical record abstraction. We were also limited to claims-based health outcomes metrics; other metrics such as patient-reported outcomes and functional status measures were not available to us, and patient experience and satisfaction were not available during pre- and postdemonstration periods.

Despite these limitations, our study is unique as a comprehensive combined evaluation of 8 state-led all-payer PCMH initiatives, reporting the impact of this all-payer model on quality of care for Medicare and Medicaid samples. Previous studies have largely focused on single-payer initiatives13-16 or on a single state’s multipayer initiative.17 Our findings, albeit inconclusive, contribute to the growing literature on multipayer medical home initiatives on the healthcare reform landscape, including one that examined variations across the states’ practices.18,19

CONCLUSIONS

The MAPCP Demonstration was an early CMS initiative to incentivize coordination of care within practices across multiple payers. Although evidence is mixed on the PCMH model’s ability to reduce unnecessary utilization and expenditures while improving the quality of care,18 the model is becoming entrenched in US healthcare. Since the MAPCP Demonstration, CMS has introduced other care-improvement, PCMH, and all-payer models, including the Transforming Clinical Practice Initiative, the Comprehensive Primary Care Plus Model, and the Vermont All-Payer Accountable Care Organization Model.20 Knowing what processes MAPCP practices opted to put in place to improve quality has informed CMS’ thinking on how to structure supports to help participants in these newer models. Lessons and limitations learned from the MAPCP Demonstration can also help set expectations for policy makers and payers around which quality measures may be more actionable for providers and most relevant for payers in monitoring provider progress in a new demonstration.

Acknowledgments

The authors would like to thank Dr Suzanne Wensky in the Center for Medicare and Medicaid Innovation for her guidance and feedback throughout this project and the development of the manuscript.

Other members on the MAPCP Evaluation Team are Donald Nichols*, Susan Haber, Joshua M. Wiener, Kevin Smith, Nathan West, Asta Sorensen*, Kathleen Farrell, Leila Kahwati, Jerry Cromwell*, Pamela Spain, Noëlle Richa Siegfried, Amy Kandilov, Vincent Keyes, Will Parish, Ann Larsen, Carol Urato*, Ellen Wilson*, Lisa Lines, Stephanie Kissam, Rebecca Perry, Patrick Edwards, Shellery Ebron, Mark Graber*, Yiyan (Echo) Liu, Benjamin Koethe*, Jenna Brophy, Andrew Kueffer*, Amy Mills, Lindsay Morris*, Rebecca Lewis, Sarah Arnold, Sophia Kwon, Konny Kim*, Heather Beil, Kent Parks, Rose Feinberg, Timothy O’Brien, Matt Urato*, Alon Evron*, Elise Hooper, Huiling Pan, Laxminarayana Ganapathi, Brendan DeCenso*, Martijn Van Hasselt*, Nancy McCall*, Stephen Zuckerman, Nicole Cafarella Lallemand, Rachel Burton**, Rebecca Peters, Robert Berenson, Kelly Devers**, Kathy Witgert, Neva Kaye, Diane Justice, Barbara Wirth, Charles Townley, Rachel Yalowich, and Mary Takach***.

*Formerly with RTI International.

**Formerly with the Urban Institute.

***Formerly with the National Academy for State Health Policy.

Author Affiliations: RTI International (ML, CB, MR, MG), Research Triangle Park, NC.

Source of Funding: CMS contract No. HHSM-500-2010-00021I, task order No. HHSM-500-T0005.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (ML, CB, MR); acquisition of data (MR); analysis and interpretation of data (ML, CB, MR, MG); drafting of the manuscript (ML, CB, MR, MG); critical revision of the manuscript for important intellectual content (ML, CB, MR, MG); and statistical analysis (CB, MR).

Address Correspondence to: Musetta Leung, PhD, RTI International, 307 Waverley Oaks Rd, Ste 101, Waltham, MA 02452. Email: mleung@rti.org.

REFERENCES

1. Defining the PCMH. Agency for Healthcare Research and Quality website. pcmh.ahrq.gov/page/defining-pcmh. Accessed August 6, 2019.

2. Landon BE, Gill JM, Antonelli RC, Rich EC. Prospects for rebuilding primary care using the patient-centered medical home. Health Aff (Millwood). 2010;29(5):827-834. doi: 10.1377/hlthaff.2010.0016.

3. Rosenthal TC. The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med. 2008;21(5):427-440. doi: 10.3122/jabfm.2008.05.070287.

4. Scholle SH, Saunders RC, Tirodkar MA, Torda P, Pawlson LG. Patient-centered medical homes in the United States. J Ambul Care Manage. 2011;34(1):20-32. doi: 10.1097/JAC.0b013e3181ff7080.

5. Bleser WK, Miller-Day M, Naughton D, Bricker PL, Cronholm PF, Gabbay RA. Strategies for achieving whole-practice engagement and buy-in to the patient-centered medical home. Ann Fam Med. 2014;12(1):37-45. doi: 10.1370/afm.1564.

6. Maeng DD, Graf TR, Davis DE, Tomcavage J, Bloom FJ Jr. Can a patient-centered medical home lead to better patient outcomes? the quality implications of Geisinger’s ProvenHealth Navigator. Am J Med Qual. 2012;27(3):210-216. doi: 10.1177/1062860611417421.

7. Maeng DD, Graham J, Graf TR, et al. Reducing long-term cost by transforming primary care: evidence from Geisinger’s medical home model. Am J Manag Care. 2012;18(3):149-155.

8. Multi-Payer Advanced Primary Care Practice. CMS website. innovation.cms.gov/initiatives/Multi-payer-Advanced-Primary-Care-Practice. Accessed February 12, 2019.

9. Multi-payer Advanced Primary Care Practice (MAPCP) Demonstration fact sheet. CMS website. innovation.cms.gov/Files/fact-sheet/mapcpdemo-Fact-Sheet.pdf. Accessed April 5, 2012.

10. Hainmueller J. Entropy balancing for causal effects: a multivariate reweighting method to produce balanced samples in observational studies. Polit Anal. 2012;20(1):25-46. doi: 10.1093/pan/mpr025.

11. HEDIS 2013 technical specifications for physician measurement. National Committee for Quality Assurance website. ncqa.org/hedis/measures. Accessed April 22, 2014.

12. Peikes D, Zutshi A, Genevro JL, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care. 2012;18(2):105-116.

13. DeVries A, Li CHW, Sridhar G, Hummel JR, Breidbart S, Barron JJ. Impact of medical homes on quality, healthcare utilization, and costs. Am J Manag Care. 2012;18(9):534-544.

14. David G, Gunnarsson C, Saynisch PA, Chawla R, Nigam S. Do patient-centered medical homes reduce emergency department visits? Health Serv Res. 2015;50(2):418-439. doi: 10.1111/1475-6773.12218.

15. Kahn KL, Timbie JW, Friedberg MW, et al. Evaluation of CMS’s federally qualified health center (FQHC) advanced primary care practice (APCP) demonstration: final second annual report. RAND Corporation website. rand.org/pubs/research_reports/RR886z1.html. Published July 2015. Accessed August 6, 2019.

16. Dale SB, Ghosh A, Peikes DN, et al. Two-year costs and quality in the Comprehensive Primary Care Initiative. N Engl J Med. 2016;374(24):2345-2356. doi: 10.1056/NEJMsa1414953.

17. Rosenthal MB, Alidina S, Friedberg MW, et al. A difference-in-difference analysis of changes in quality, utilization and cost following the Colorado multi-payer patient-centered medical home pilot. J Gen Intern Med. 2016;31(3):289-296. doi: 10.1007/s11606-015-3521-1.

18. Nichols DE, Haber SG, Romaire MA, Wensky SG; Multi-Payer Advanced Primary Care Practice Evaluation Team. Changes in utilization and expenditures for Medicare beneficiaries in patient-centered medical homes: findings from the Multi-Payer Advanced Primary Care Practice Demonstration. Med Care. 2018;56(9):775-783. doi: 10.1097/MLR.0000000000000966.

19. Burton RA, Lallemand NM, Peters RA, Zuckerman S; MAPCP Demonstration Evaluation Team. Characteristics of patient-centered medical home initiatives that generated savings for Medicare: a qualitative multi-case analysis. J Gen Intern Med. 2018;33(7):1028-1034. doi: 10.1007/s11606-018-4309-x.

20. Innovation models. CMS website. innovation.cms.gov/initiatives/index.html#views=models. Accessed August 6, 2019.