Examining Differential Performance of 3 Medical Home Recognition Programs

July 19, 2018
Ammarah Mahmud, MPH; Justin W. Timbie, PhD; Rosalie Malsberger, MS; Claude M. Setodji, PhD; Amii Kress, PhD; Liisa Hiatt, MS; Peter Mendel, PhD; Katherine L. Kahn, MD
Volume 24, Issue 7

We examine utilization, quality, and expenditures among Medicare beneficiaries receiving care at federally qualified health centers and compare outcomes among those attributed to 1 of 3 recognition programs versus none.

ABSTRACT

Objectives: We examined differences in patient outcomes associated with 3 patient-centered medical home (PCMH) recognition programs—National Committee for Quality Assurance (NCQA) Level 3, The Joint Commission (TJC), and Accreditation Association for Ambulatory Health Care (AAAHC)—among Medicare beneficiaries receiving care at federally qualified health centers (FQHCs).

Study Design: We used data from CMS’ FQHC Advanced Primary Care Practice Demonstration, in which participating FQHCs received assistance to achieve NCQA Level 3 PCMH recognition. We assessed the impact of the 3 recognition programs on utilization, quality, and Medicare expenditures using a sample of 1108 demonstration and comparison FQHCs.

Methods: Using propensity-weighted difference-in-differences analyses, we compared changes in outcomes over 3 years for beneficiaries attributed to FQHCs that achieved each type of recognition relative to beneficiaries attributed to FQHCs that did not achieve recognition.

Results: Recognized FQHCs, compared with nonrecognized FQHCs, were associated with significant 3-year changes in FQHC visits, non-FQHC primary care visits, specialty visits, emergency department (ED) visits, hospitalizations, a composite diabetes process measure, and Medicare expenditures. Changes varied in direction and strength by recognition type. In year 3, compared with nonrecognized sites, NCQA Level 3 sites were associated with greater increases in ambulatory visits and quality and greater reductions in hospitalizations and expenditures (P <.01), TJC sites were associated with significant reductions in ED visits and hospitalizations (P <.01), and AAAHC sites had changes in the opposite direction of what we anticipated.

Conclusions: Heterogeneous changes in beneficiary utilization, quality, and expenditures by recognition type may be explained by differences in recognition criteria, evaluation processes, and documentation requirements.

Am J Manag Care. 2018;24(7):334-340

Takeaway Points

This study contributes to the literature assessing the effectiveness of medical home (MH) recognition programs on patient outcomes:

  • This analysis is the first to compare beneficiary outcomes across multiple MH recognition programs.
  • We document heterogeneity in the association between achievement of MH recognition and beneficiary outcomes among 3 MH recognition programs.
  • These study findings support the need to better understand how different components of MH recognition programs contribute to site-level changes and patient outcomes.

Many medical practices are pursuing primary care transformation using a medical home (MH) model.1,2 These models incorporate the joint principles of the patient-centered medical home (PCMH), which describe features of a strong primary care delivery system, including enhanced access, coordinated and comprehensive care, and continuous quality improvement.3-5 With time, additional administrative and financial burdens associated with primary care transformation have prompted a need to distinguish practices that systematically adhered to MH principles from those that did not.3,6-8

Over time, payers have encouraged practices to pursue MH recognition in order to codify practices’ use of PCMH principles.9,10 As of 2017, 3 organizations offer 3 common forms of MH recognition: National Committee for Quality Assurance (NCQA; 12,000 practice sites), the Accreditation Association for Ambulatory Health Care (AAAHC; 6000 practice sites), and The Joint Commission (TJC; 1400 practice sites).11-13

Despite the rapid growth of these programs, limited evidence exists on the relative effectiveness of different recognition types on patient outcomes.14-18 Although each program’s recognition standards align with core elements of the MH model, required elements vary for each program and may contribute to differences in outcomes. For example, recognition programs have differing criteria related to use of health information technology (IT), care coordination, and medication management. Additionally, each program assigns a different weight to these areas when measuring the extent of implementation of each program’s requirements.

Although descriptive work has compared recognition programs’ application procedures and recognition standards, no analyses have examined patient outcomes associated with these programs.19-23 Using data from a nationwide evaluation of an initiative to assist federally qualified health centers (FQHCs) in becoming PCMHs, we examine changes in utilization, quality, and Medicare expenditures associated with the terminal (ie, highest) MH recognition status of 3 recognition programs: NCQA 2011 Level 3 PCMH, TJC Primary Care MH, and AAAHC MH. As NCQA 2011 Level 1 and Level 2 PCMH are not terminal recognition levels, they are not included in this analysis. We hypothesized that changes in visits to FQHCs, primary care outside of FQHCs, specialists, and emergency departments (EDs); inpatient admissions; quality of care; and expenditures would vary by MH recognition type.

METHODS

MH Recognition Programs

The 3 recognition programs examined differ in the content and specificity of their criteria, evaluation processes, and documentation requirements. We reviewed descriptive studies comparing these programs, as well as their online resources, to provide context for our analysis.22-26

NCQA has a practice site-level recognition program, meaning that sites affiliated with the same medical practice must individually pursue recognition. Practices pursuing recognition may achieve Level 1, 2, or 3 recognition based on the extent to which they meet specific standards.27 NCQA emphasizes meaningful use requirements; almost half of its recognition score is derived from IT capabilities.

In contrast, TJC recognition is awarded at the organizational level, meaning that a single recognition award is given to the organization spanning all of its practice sites. Practices seeking MH recognition must also meet criteria for TJC’s ambulatory care accreditation, which requires sites to implement foundational components for MH content areas; these do not disproportionately emphasize health IT capabilities. Uniquely, TJC requires sites to collect data on patient experience, which is not required for NCQA recognition.19,28

Similar to TJC, AAAHC operates an organizational-level program allowing practices to pursue MH recognition in addition to a foundational ambulatory care certification.21 Burton et al found that a large proportion of AAAHC recognition criteria require practices to develop policies related to patient care, staff development, and prevention practices. However, limited guidance is provided about the specification of these policies, which allows for variation in site-level interpretation and implementation.19 Almost a quarter of the AAAHC recognition score derives from the development of these policies.

Study Sample

The site and beneficiary sample used in this analysis was drawn from our evaluation of CMS’ FQHC Advanced Primary Care Practice (APCP) Demonstration.29-32 Demonstration sites received financial support and technical assistance to achieve NCQA Level 3 recognition within a 3-year period. Independent of this demonstration, both demonstration and comparison sites utilized other technical and financial resources through the Health Resources & Services Administration and CMS during the same period.30 Details of this demonstration’s evaluation methodology have been published and are supplemented in the eAppendix (available at ajmc.com).29,30

We used Medicare claims data from 2010 to 2014 to attribute beneficiaries to FQHCs based on the plurality of their primary care visits during the year before the demonstration. The attribution process was repeated in each subsequent year for beneficiaries who utilized primary care in the years following the start of the demonstration. This analysis used data on beneficiaries attributed to 1 of 1108 of the evaluation’s 1330 FQHCs. We included 393 FQHCs with NCQA Level 3 (n = 232,990 beneficiaries), 100 FQHCs with TJC (n = 55,471), and 14 FQHCs with AAAHC (n = 14,020) recognition, and a group of 601 FQHCs that did not achieve recognition (n = 302,480) by the end of 2014 (Figure). Each of the 4 groups included demonstration and comparison sites under the FQHC APCP Demonstration (Table 1). This analysis focuses on outcomes of beneficiaries attributed to 3 terminal recognition programs as distinct from foundational programs (eg, NCQA PCMH Level 1 and 2, TJC accreditation, and AAAHC certification). The remaining 222 of 1330 sites were excluded because they received a foundational recognition (n = 147), achieved multiple recognition types (n = 59), or were recognized by a program other than the 3 considered in this analysis (n = 16).29
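The plurality-based attribution rule described above can be sketched as follows. This is a minimal illustration with a hypothetical helper name; the evaluation's actual attribution logic, including tie-breaking and eligibility rules, is detailed in the eAppendix and published methodology.29,30

```python
from collections import Counter

def attribute_beneficiary(visit_site_ids):
    """Attribute a beneficiary to the site accounting for the
    plurality of their primary care visits in the lookback year.
    Returns None if there were no qualifying visits; ties are
    broken arbitrarily here (a simplification for illustration)."""
    if not visit_site_ids:
        return None
    site, _count = Counter(visit_site_ids).most_common(1)[0]
    return site

# 3 of 5 primary care visits were to FQHC "A", so the beneficiary
# is attributed to "A" for that year.
print(attribute_beneficiary(["A", "B", "A", "B", "A"]))  # A
```

As in the study, this attribution would be repeated in each subsequent year for beneficiaries who continued to use primary care.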

Description of Outcomes

Utilization measures included FQHC visits, non-FQHC primary care visits (ie, visits to rural health clinics or physician offices but not FQHCs), and specialist visits. In each category, the visit could be to a physician, nurse practitioner, or physician assistant. Also included were ED visits, inpatient admissions, admissions for ambulatory care-sensitive conditions (ACSCs), and 30-day unplanned readmissions. We used 1 composite and 4 diabetes process quality measures (glycated hemoglobin [A1C] testing, low-density lipoprotein cholesterol testing, retinal eye exams, and nephropathy testing within the past year) and a measure of annual lipid tests for patients with ischemic vascular disease. We assessed effects on Medicare expenditures in 3 categories: inpatient, Part B (physician and supplier), and total Medicare expenditures (consisting of inpatient, outpatient, skilled nursing facility, home health, hospice, durable medical equipment, and Part B expenditures).

Statistical Analysis

We used difference-in-differences (DID) analyses to compare changes in outcomes from baseline to the end of each demonstration year for beneficiaries attributed to NCQA Level 3-, TJC-, or AAAHC-recognized FQHCs compared with beneficiaries attributed to nonrecognized sites. A positive DID estimate can result from either a greater increase or smaller decrease in outcome measures for recognized versus nonrecognized sites. We used 2-part models to estimate effects on utilization and Medicare expenditures, with the first part using logistic regression and the second part using either negative binomial (utilization) or linear models (expenditures).33,34
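As a concrete illustration of how a positive DID estimate can arise from a smaller decrease rather than an actual increase, the basic estimator contrasts pre-to-post changes across the two groups (illustrative numbers only, not the study's data):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the pre-to-post change among
    beneficiaries at recognized sites minus the change among
    beneficiaries at nonrecognized sites."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# FQHC visit rates per 1000 beneficiaries fall in BOTH groups, but
# less steeply at recognized sites, so the DID estimate is positive
# even though utilization declined everywhere (illustrative values).
print(did_estimate(4000, 3900, 4000, 3720))  # 180
```

The study's regression-based estimates additionally adjust for covariates and propensity score weights, but the sign logic is the same.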

Readmissions and process measures were modeled using logistic regression. We used multinomial logistic regression to create propensity score weights reflecting each beneficiary’s propensity to be attributed to a site that achieved each recognition based on beneficiary and site-level characteristics measured the year prior to each beneficiary’s attribution (eAppendix). To provide context for the DID estimates, we present information on trends occurring between the baseline year and year 3 for all 4 groups. Lastly, we analyzed trends among groups for 2 years prior to the study to confirm DID model assumptions (eAppendix).
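One common way to turn multinomial-logit predictions into propensity score weights is inverse-probability weighting, sketched below. This is an assumption for illustration only; the paper's exact weight construction is described in its eAppendix, and the linear predictors here are hypothetical.

```python
import math

def softmax(scores):
    """Convert multinomial-logit linear predictors into
    probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def ipw_weight(scores, assigned_group):
    """Inverse-probability weight for one beneficiary: 1 over the
    predicted probability of the recognition group (NCQA Level 3,
    TJC, AAAHC, or none) to which they were actually attributed.
    Both the inputs and the weight form are illustrative."""
    probs = softmax(scores)
    return 1.0 / probs[assigned_group]

# With equal predictors, each of the 4 groups has probability 0.25,
# so the beneficiary's weight is 4.0.
print(ipw_weight([0.0, 0.0, 0.0, 0.0], 2))  # 4.0
```

Weighting each beneficiary this way balances observed baseline characteristics across the 4 recognition groups before the DID comparison.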

RESULTS

Sites associated with the 4 recognition outcomes were similar with respect to the demographics of the patients they serve (Table 2). The majority of FQHC users were female, and half were disabled. After using propensity score weights to balance baseline characteristics across the 4 groups, only a few differences remained. Beneficiaries attributed to AAAHC-recognized sites were more likely to be white and less likely to be black than those attributed to nonrecognized sites. Beneficiaries attributed to TJC-recognized sites were more likely to live in metropolitan areas and less likely to live in rural areas than those attributed to nonrecognized sites. Beneficiaries attributed to AAAHC-recognized sites were less likely to live in metropolitan or rural areas, more likely to live in urban areas, and less likely to live in high-poverty areas relative to those attributed to nonrecognized sites. In addition, AAAHC sites tended to have fewer specialists than nonrecognized sites and served approximately 3 times as many Medicare beneficiaries, on average.

Several patterns in outcome measures emerged from baseline to the end of each demonstration year for beneficiaries attributed to recognized sites relative to nonrecognized sites. The first 4 columns in Table 3 show the direction of 3-year trends for each outcome measure for beneficiaries associated with each group. The remainder of Table 3 shows DID estimates and P values for analyses comparing sites with each recognition type with nonrecognized sites.

For FQHC visits, the first 4 columns show a statistically significant decrease across all 4 groups. The remaining columns show net increases of 88, 172, and 180 visits per 1000 beneficiaries for NCQA Level 3 sites in years 1, 2, and 3, respectively (P <.001). These estimates reflect a smaller reduction in FQHC visits among NCQA Level 3 sites compared with nonrecognized sites. For TJC- and AAAHC-recognized sites, all significant estimates are negative, reflecting a larger reduction in FQHC visits relative to nonrecognized sites.

All 4 groups were associated with an increase in non-FQHC primary care visits, but a smaller rate of increase for NCQA Level 3 sites and a larger rate of increase for AAAHC sites led to a relative reduction of 90 visits and a relative increase of 166 visits per 1000 beneficiaries in year 3, respectively, compared with nonrecognized sites (P <.01). Specialist visits similarly increased over time for all 4 groups, but only NCQA Level 3 sites were associated with a relative decrease over time (45 fewer visits per 1000 beneficiaries in year 3) relative to nonrecognized sites (P <.01).

For nonambulatory utilization measures, only TJC sites achieved greater reductions in ED visits than nonrecognized sites (39 fewer ED visits per 1000 beneficiaries; P <.01). By contrast, AAAHC sites were associated with an increase of 78 visits per 1000 beneficiaries in year 3 relative to nonrecognized sites (P <.01). This was due to a smaller reduction in ED visits compared with nonrecognized sites, rather than an increase in ED utilization over time. Although inpatient admissions increased in all 4 groups, NCQA Level 3 and TJC sites had smaller increases (11 and 18 fewer admissions per 1000 beneficiaries in year 3, respectively; P <.001). Recognized sites were not associated with reductions in admissions for ACSCs or readmissions.

Few clear trends emerged among the process measures. Both NCQA Level 3 and AAAHC sites were associated with larger increases over time in the diabetes composite measure (1.6 and 2.7 percentage points, respectively, in year 3) relative to nonrecognized sites (P <.05). However, for the individual diabetes measures, only NCQA Level 3 sites were associated with relative increases in A1C tests (1.0 percentage point), eye exams (1.4 percentage points), and nephropathy tests (2.1 percentage points). These were largely driven by smaller decreases in rates over time relative to nonrecognized sites.

Medicare expenditures increased over time for all 4 groups in each of the 3 categories we examined. However, only NCQA Level 3 sites had a smaller rate of increase than nonrecognized sites, leading to a relative decrease in total Medicare ($300 per beneficiary), inpatient ($214 per beneficiary), and Part B ($72 per beneficiary) expenditures in year 3 (P <.01).

DISCUSSION

We compared trends in utilization, quality, and expenditures over 3 years for Medicare beneficiaries attributed to sites that received 1 of 3 types of MH recognition relative to sites with no recognition. The former were more likely to demonstrate changes in outcomes consistent with better access and quality of care, lower inpatient utilization, and lower expenditures in year 3. However, the direction and strength of these associations varied by recognition type.

Each recognition program was associated with a distinct pattern of relative change in FQHC utilization rates: a net increase, net decrease, or no difference relative to trends among nonrecognized sites. Relative changes in non-FQHC primary care visit rates were in the opposite direction to those of FQHC visits, suggesting that beneficiaries are exhibiting greater loyalty to FQHCs. Only NCQA Level 3-recognized sites achieved significant relative reductions in specialty visits by year 3, which may reflect reduced need due to the consolidation of primary care at these recognized FQHCs or implementation of more efficient referral systems.

Among non-ambulatory care utilization measures, NCQA- and TJC-recognized sites were associated with reductions in all-cause inpatient admissions. These changes are unlikely to be driven by trends in hospitalizations for ACSCs, where we observed no significant differences between recognized and nonrecognized sites. Instead, these reductions in rates of all-cause admissions may be related to improved primary care utilization or enhanced coordination within FQHCs or between FQHCs and outside specialists, which may lead to better management of patients with complex conditions or lower rates of specialty referrals. These factors might also explain TJC-recognized sites' larger reductions in ED utilization over time. Large relative increases in ED utilization among AAAHC-recognized sites, along with reductions in FQHC visits and increases in non-FQHC visits, raise some concern about unintended consequences of MH implementation in these sites. However, only a small number of AAAHC sites were included in this study; in a larger future study, currently null or small differences might reach statistical significance.

Across all recognition programs, quality of care was more likely to improve relative to trends among nonrecognized sites. In year 3, we observed many statistically significant improvements for NCQA Level 3 sites and in the composite diabetes measure for AAAHC sites. Small sample sizes for TJC- and AAAHC-recognized sites may limit the ability to detect statistically significant associations and may explain results for Medicare expenditures, where we observed statistically significant changes only for NCQA Level 3 sites.

Some of the observed heterogeneity in associations with patient outcomes may be due to the specific recognition program criteria, which vary with respect to their content and specificity, as well as the evaluation process and required documentation. One major difference related to the content and specificity of criteria is that NCQA places a heavier emphasis on the use of electronic health records (EHRs) that incorporate meaningful use criteria.19,35 Requirements that focus on the utilization of EHRs can enhance primary care delivery and strengthen population health management. In comparison, TJC scoring criteria are weighted fairly consistently across multiple domains, rather than heavily emphasizing any one content area.19 Sites pursuing AAAHC recognition have to implement internal policies, but they receive limited guidance on the specificity of these policies. This may lead sites to implement practice changes that are not associated with patient outcomes measured in this analysis. Nevertheless, there may be unmeasured beneficial or adverse effects of these changes at the clinic or patient level.

Additionally, differences in the evaluation process and the associated documentation requirements may contribute to heterogeneity in patient outcomes. For example, sites pursuing NCQA recognition are responsible for completing worksheets, responding to follow-up questions, and providing documentation, a process that takes approximately 40 to 80 hours to complete and upload online.19 In comparison, TJC and AAAHC use external surveyors to perform on-site evaluations without requiring additional time to upload documentation materials. Sites pursuing AAAHC recognition provide supplemental documentation to the external surveyors for recognition, but sites seeking TJC recognition are only required to provide documentation to receive ambulatory care accreditation, a prerequisite for MH recognition. These required tasks add an administrative burden to staff who also are involved in the implementation of PCMH strategies and daily operational activities.

Limitations

This analysis had several limitations. First, the small number of sites achieving AAAHC recognition limited our ability to identify statistically significant associations with many outcome measures. Second, we were unable to randomize sites to recognition programs. Accordingly, we used DID analyses to control for patient- and site-level differences. Nevertheless, it is possible that unmeasured confounders could bias the effects of recognition types, including self-selection of sites to recognition programs. For example, heterogeneity in the criteria and evaluation process among recognition programs, and how those align with a site’s existing practice structure, may influence sites’ decisions to pursue one recognition type over another. Sites’ self-selection to recognition programs may explain differences by program if these factors are correlated with beneficiary outcomes.

Third, a large proportion of sites achieving NCQA Level 3 recognition participated in the CMS FQHC APCP Demonstration and received support to pursue NCQA recognition. Despite our DID design, including examination of parallel trends (eAppendix), this group may differ from sites receiving other types of recognition. This may explain the strong associations with many outcomes for NCQA-recognized sites. To estimate the effect of recognition on outcomes independent of the effect of the demonstration, we conducted a sensitivity analysis replicating these analyses using the 687 comparison sites. We found fewer statistically significant results, although they were consistent with outcomes reported in this analysis (eAppendix).

Fourth, we measured associations between MH recognition and beneficiary outcomes over a 3-year period, but many sites obtained recognition toward the end of the study period. A longer duration of follow-up after achieving recognition may be needed to observe effects on outcomes.36 Fifth, our analyses were limited to claims-based outcomes as our sample sizes for patient experience measures were inadequate to support comparisons across the 3 recognition groups. Sixth, we achieved high levels of balance by using propensity score weights, but it is possible that unobserved differences among groups could bias the results if these unobserved factors changed over time. Lastly, we examined the effects of these recognition programs among FQHCs but do not know the extent to which these findings may generalize to non-safety-net clinics. Nevertheless, our findings suggest value in additional examination of how specific components of recognition programs affect patient outcomes across settings.

Despite these limitations, this analysis is the first to compare beneficiary outcomes across recognition programs, document heterogeneity in the association among recognition programs with beneficiary outcomes, and provide insight into the reasons for these differences. We note that our analyses pertain to recognition programs at a particular point in time, and all 3 programs have evolved over time.37-39 Given the extensive resources required by organizations to support the development and maintenance of recognition programs, and by sites to use them, this empirical evidence may help future studies focus on how changes in recognition criteria, evaluation processes, or documentation requirements may additionally affect patient outcomes.

CONCLUSIONS

As practices continue to pursue MH recognition, it is important to understand how the components of each recognition program influence the types of changes that sites pursue and the degree to which those changes translate into changes in patient outcomes. Over the 3-year period, NCQA Level 3 sites were associated with improvements across the greatest number of outcomes, whereas TJC-recognized sites showed improvements in fewer outcomes. AAAHC-recognized sites were associated with limited improvements, primarily due to the small sample size. Although this analysis of variation in outcomes contributes to the literature on the effectiveness of recognition programs, additional comparative analyses of the types and components of recognition programs are needed to better understand the associations between specific program requirements and patient outcomes.

Author Affiliations: RAND Corporation (AM, JWT, RM, CMS, AK, LH, PM, KLK), Arlington, VA; David Geffen School of Medicine at UCLA (KLK), Los Angeles, CA.

Source of Funding: Funding was provided by CMS (contract HHSM-500-2005-00028I, task #T0008). The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of HHS or any of its agencies.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JWT, CMS, AK, LH, PM, KLK); acquisition of data (AK, LH, KLK); analysis and interpretation of data (AM, JWT, RM, CMS, LH, PM, KLK); drafting of the manuscript (AM, JWT, KLK); critical revision of the manuscript for important intellectual content (AM, JWT, CMS, AK, PM, KLK); statistical analysis (JWT, RM, CMS, KLK); obtaining funding (KLK); administrative, technical, or logistic support (AM, RM, KLK); and supervision (JWT, KLK).

Address Correspondence to: Ammarah Mahmud, MPH, RAND Corporation, 1200 S Hayes St, Arlington, VA 22202. Email: amahmud@rand.org.

REFERENCES

1. Bitton A, Martin C, Landon BE. A nationwide survey of patient centered medical home demonstration projects. J Gen Intern Med. 2010;25(6):584-592. doi: 10.1007/s11606-010-1262-8.

2. Rosenthal TC. The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med. 2008;21(5):427-440. doi: 10.3122/jabfm.2008.05.070287.

3. Mathematica Policy Research. Making medical homes work: moving concept to practice. Center for Studying Health System Change website. hschange.org/CONTENT/1030/1030.pdf. Published December 2008. Accessed March 31, 2017.

4. Starfield B, Shi L, Macinko J. Contribution of primary care to health systems and health. Milbank Q. 2005;83(3):457-502. doi: 10.1111/j.1468-0009.2005.00409.x.

5. American Academy of Family Physicians, American Academy of Pediatrics, American College of Physicians, American Osteopathic Association. Joint principles of the patient-centered medical home. American Academy of Family Physicians website. aafp.org/dam/AAFP/documents/practice_management/pcmh/initiatives/PCMHJoint.pdf. Published March 7, 2007. Accessed September 5, 2016.

6. Vest JR, Bolin JN, Miller TR, Gamm LD, Siegrist TE, Martinez LE. Medical homes: “where you stand on definitions depends on where you sit.” Med Care Res Rev. 2010;67(4):393-411. doi: 10.1177/1077558710367794.

7. Halladay JR, Stearns SC, Wroth T, et al. Cost to primary care practices of responding to payer requests for quality and performance data. Ann Fam Med. 2009;7(6):495-503. doi: 10.1370/afm.1050.

8. West DR, Radcliff TA, Brown T, Cote MJ, Smith PC, Dickinson WP. Costs associated with data collection and reporting for diabetes quality improvement in primary care practices: a report from SNOCAP-USA. J Am Board Fam Med. 2012;25(3):275-282. doi: 10.3122/jabfm.2012.03.110049.

9. Edwards ST, Bitton A, Hong J, Landon BE. Patient-centered medical home initiatives expanded in 2009-13: providers, patients, and payment incentives increased. Health Aff (Millwood). 2014;33(10):1823-1831. doi: 10.1377/hlthaff.2014.0351.

10. Bailit M, Phillips K, Long A. Paying for the medical home: payment models to support patient-centered medical home transformation in the safety net. Safety Net Medical Home Initiative website. safetynetmedicalhome.org/sites/default/files/Policy-Brief-1.pdf. Published October 2010. Accessed April 30, 2017.

11. AAAHC to accredit more than 70 correctional health care units nationwide [news release]. Skokie, IL: Accreditation Association for Ambulatory Health Care, Inc; March 16, 2017. aaahc.org/Global/New%20items-Press%20releases/Federal%20Bureau%20of%20Prisons%20press%20release%20FINAL.pdf. Accessed May 31, 2017.

12. 192 Joint Commission ambulatory care accredited orgs (1418 sites*) with primary care medical home (PCMH) certification (by state) as of 1/1/2017. The Joint Commission website. jointcommission.org/assets/1/18/Updated_PCMH_list_of_orgs_1-1-17.pdf. Published January 1, 2017. Accessed May 31, 2017.

13. Practices. National Committee for Quality Assurance website. reportcards.ncqa.org/#/practices/list?recognition=Patient-Centered%20Medical%20Home. Accessed May 31, 2017.

14. Peikes D, Zutshi A, Genevro J, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care. 2012;18(2):105-116.

15. Hoff T, Weller W, DePuccio M. The patient-centered medical home: a review of recent research. Med Care Res Rev. 2012;69(6):619-644. doi: 10.1177/1077558712447688.

16. Cole ES, Campbell C, Diana ML, Webber L, Culbertson R. Patient-centered medical homes in Louisiana had minimal impact on Medicaid population’s use of acute care and costs. Health Aff (Millwood). 2015;34(1):87-94. doi: 10.1377/hlthaff.2014.0582.

17. van Hasselt M, McCall N, Keyes V, Wensky SG, Smith KW. Total cost of care lower among Medicare fee-for-service beneficiaries receiving care from patient-centered medical homes. Health Serv Res. 2015;50(1):253-272. doi: 10.1111/1475-6773.12217.

18. Morgan TO, Everett DL, Dunlop AL. How do interventions that exemplify the joint principles of the patient centered medical home affect hemoglobin A1C in patients with diabetes: a review. Health Serv Res Manag Epidemiol. 2014;1: 2333392814556153. doi: 10.1177/2333392814556153.

19. Burton RA, Devers KJ, Berenson RA. Patient-centered medical home recognition tools: a comparison of ten surveys’ content and operational details. Urban Institute website. urban.org/research/publication/patient-centered-medical-home-recognition-tools-comparison-ten-surveys-content-and-operational-details. Published March 1, 2012. Accessed September 30, 2016.

20. Shi L, Lee DC, Chung M, Liang H, Lock D, Sripipatana A. Patient-centered medical home recognition and clinical performance in U.S. community health centers. Health Serv Res. 2017;52(3):984-1004. doi: 10.1111/1475-6773.12523.

21. Patient-centered medical home resources: comparison chart. Health Resources and Services Administration website. bphc.hrsa.gov/qualityimprovement/clinicalquality/accreditation-pcmh/pcmhrecognition.pdf. Updated February 19, 2015. Accessed September 30, 2016.

22. Comparison of NCQA 2014 medical home recognition to 2014 Joint Commission primary care medical home certification for ambulatory care organizations. The Joint Commission website. jointcommission.org/assets/1/18/PCMH_cross_ncqa.pdf. Published April 2015. Accessed October 31, 2016.

23. Eligibility for certification for palliative care. The Joint Commission website. jointcommission.org/certification/eligiblity_palliative_care.aspx. Accessed October 31, 2016.

24. Accreditation Association for Ambulatory Health Care, Inc. Medical Home On-Site Certification Handbook. Skokie, IL: Accreditation Association for Ambulatory Health Care, Inc; 2011. aaahc.org/Global/Handbooks/AAAHC_MH%20OSC%20HB13_Final.pdf. Accessed April 30, 2017.

25. Network accreditation: designed for health care organizations that own and operate multiple sites. Accreditation Association for Ambulatory Health Care, Inc website. aaahc.org/en/accreditation/Network_Accreditation_Program. Accessed February 28, 2017.

26. Medical Group Management Association. The Patient Centered Medical Home Guidelines: A Tool to Compare National Programs. Englewood, CO: Medical Group Management Association; 2011. csimt.gov/wp-content/uploads/MGMA-PCMH-Guidelines_Tool-to-Compare-National-Programs.pdf. Accessed April 30, 2017.

27. NCQA’s Patient Centered Medical Home (PCMH) 2011. National Committee for Quality Assurance website. ncqa.org/Portals/0/newsroom/PCMH%202011%20Overview%20White%20Paper%20Copyright.pdf. Published January 31, 2011. Accessed March 31, 2017.

28. National Committee for Quality Assurance. Patient-Centered Medical Home (PCMH) 2011 frequently asked questions: PCMH 6B: measure patient/family experience. Texas Health Care Regional Partnership 6 website. texasrhp6.com/wp-content/uploads/2014/02/PCMH-2011-FAQs-9.24.13.pdf. Accessed July 6, 2018.

29. Kahn KL, Timbie JW, Friedberg MW, et al. Evaluation of CMS’s Federally Qualified Health Center (FQHC) Advanced Primary Care Practice (APCP) Demonstration: Final Report. Santa Monica, CA: RAND Corporation; 2017. rand.org/pubs/research_reports/RR886z2.html. Accessed July 31, 2017.

30. Timbie JW, Setodji CM, Kress A, et al. Implementation of medical homes in federally qualified health centers. N Engl J Med. 2017;377(3):246-256. doi: 10.1056/NEJMsa1616041.

31. Kahn KL, Timbie JW, Friedberg MW, et al. Evaluation of CMS’ FQHC APCP Demonstration: Final First Annual Report. Santa Monica, CA: RAND Corporation; 2015. rand.org/pubs/research_reports/RR886.html. Accessed January 31, 2017.

32. Kahn KL, Timbie JW, Friedberg MW, et al. Evaluation of CMS’s Federally Qualified Health Center (FQHC) Advanced Primary Care Practice (APCP) Demonstration: Final Second Annual Report. Santa Monica, CA: RAND Corporation; 2015. rand.org/pubs/research_reports/RR886z1.html. Accessed February 28, 2017.

33. Duan N, Manning WG, Morris CN, Newhouse JP. A comparison of alternative models for the demand for medical care. J Bus Econ Stat. 1983;1(2):115-126.

34. Manning WG, Morris CN, Newhouse JP, et al. A two-part model of the demand for medical care: preliminary results from the health insurance study. In: Health, Economics, and Health Economics: Proceedings of the World Congress on Health Economics; September 1980; Leiden, Netherlands.

35. National Committee for Quality Assurance. Standards and Guidelines for NCQA’s Patient-Centered Medical Home (PCMH) 2011: Appendix 2. Washington, DC: National Committee for Quality Assurance; 2011. acofp.org/acofpimis/Acofporg/Apps/2014_PCMH_Finals/Tools/1_PCMH_Recognition_2014_Front_Matter.pdf. Accessed July 31, 2017.

36. Maeng DD, Graf TR, Davis DE, Tomcavage J, Bloom FJ Jr. Can a patient-centered medical home lead to better patient outcomes? the quality implications of Geisinger’s ProvenHealth Navigator. Am J Med Qual. 2012;27(3):210-216. doi: 10.1177/1062860611417421.

37. Berkeley L. Joint Commission’s primary care medical home certification option. The Joint Commission website. jointcommission.org/assets/1/18/presentation_ahc_pcmh_joint_commission_preconf1_0315.pdf. Published March 23, 2015. Accessed April 30, 2017.

38. Standards revisions for 2016. Accreditation Association for Ambulatory Health Care, Inc website. aaahc.org/Global/Handbooks/AppendixA_FNL.pdf. Published 2015. Accessed April 30, 2017.

39. Gittlen S. NCQA redesigns patient-centered medical home recognition program. HealthLeaders Media website. healthleadersmedia.com/quality/ncqa-redesigns-patient-centered-medical-home-recognition-program. Published March 22, 2017. Accessed April 30, 2017.