Patient-Centered Medical Homes and Preventive Service Use

The American Journal of Managed Care, May 2019, Volume 25, Issue 5

Preventive service use was better among patients with a usual source of care but was improved little by patient-centered medical home status.


Objectives: Despite data suggesting that patient-centered medical homes (PCMHs) improve preventive service use, limited nationally representative evidence exists. This study compared preventive service use between patients with and without a usual source of care (USC) and, of the patients with a USC, between those in practices with and without PCMH status.

Study Design: Cross-sectional study.

Methods: We constructed general and disease-specific preventive service indicators using the 2015 Medical Expenditure Panel Survey. Preventive service rates were compared between patients reporting a USC versus no USC and between patients whose USC practices were PCMH certified versus not PCMH certified. Unadjusted outcomes were tested using χ2 tests. Multivariable logistic regression was used to test differences between groups, controlling for predisposing, enabling, and need variables.

Results: In multivariable logistic regression, respondents with a USC reported higher rates of screening for breast cancer (odds ratio [OR], 2.40; 95% CI, 1.81-3.17) and cervical cancer (OR, 1.99; 95% CI, 1.61-2.47) than respondents with no USC. Respondents with diabetes and a USC had higher odds of an annual eye exam (OR, 2.05; 95% CI, 1.26-3.33) than respondents with diabetes and no USC. Respondents with diabetes whose USC was PCMH certified reported higher rates of annual foot screenings (OR, 2.01; 95% CI, 1.31-3.08) and lower rates of annual cholesterol screenings (OR, 0.30; 95% CI, 0.11-0.83) than those with a USC that was not PCMH certified.

Conclusions: Having a USC was associated with higher rates of several preventive screening measures. However, there were fewer significant preventive screening relationships by PCMH status among individuals with a USC. Our results suggest that improving access to a USC may be as important as the application of PCMH principles to a USC practice.

Am J Manag Care. 2019;25(5):e153-e159

Takeaway Points

Our study compared preventive service use between patients with and without a usual source of care (USC) and, of the patients with a USC, between those in practices with and without patient-centered medical home (PCMH) status, using the 2015 Medical Expenditure Panel Survey database.

  • Patients with a USC had higher odds of receiving preventive services for cancer, diabetes, and asthma treatment than patients without a USC.
  • Few differences were observed in the use of preventive services among patients with a USC that was certified as a PCMH in comparison with respondents with a USC not certified as a PCMH.

Chronic disease management is a major challenge facing the US healthcare system.1-4 The majority of chronic disease management occurs in the primary care setting, which provides an opportunity for preventive screening and treatment.5 Identifying a provider or place as a usual source of care (USC) can improve preventive service use.6 However, it is suggested that primary care management is best achieved when a USC provider delivers patient-centered care and assists in care coordination across providers.7,8

The patient-centered medical home (PCMH) emphasizes the role of a primary care provider in coordinating care across settings and services.9 Under National Committee for Quality Assurance (NCQA) guidance, PCMH practices agree to adopt 6 key concepts: (1) emphasizing team-based care and practice organization, (2) knowing and managing patients through comprehensive data collection and sharing, (3) patient-centered access and continuity, (4) care management and support, (5) care coordination and care transitions, and (6) performance measurement and quality improvement.10 These key concepts emphasize provider roles and responsibilities under the team-based care model, focus on longitudinal relationships between patients and providers, highlight the delivery of evidence-based screening as measures of performance, and provide data availability to providers to identify gaps in preventive screening. Therefore, the adoption of PCMH principles has the potential to contribute to better preventive care service delivery. A 2013 systematic review of 19 PCMH studies suggests improved patient care experiences and preventive service delivery.11 A more recent systematic review of the PCMH literature in low-income patients showed improvements in clinical outcomes, increases in medication adherence, and lower emergency department (ED) utilization.12

Despite growing evidence of benefit, PCMH studies are primarily conducted as regional demonstrations. A notable exception is the Comprehensive Primary Care Plus program, which includes 2932 practices across 18 regions.13 To our knowledge, only 1 study has examined PCMH benefits across a nationally representative population in the United States.14 However, this study extrapolated the definition of a medical home practice and did not measure PCMH certification status directly. In addition, the majority of PCMH studies compare outcomes between patients enrolled or not enrolled in medical homes within a health system or health plan offering medical home services. This restricts the assessment of the potential benefit that the medical home model might have to patients who do not currently have a USC due to problems accessing insurance, financial constraints, or other burdens. The objective of this study is to compare the quality of preventive services provided to patients with and without an identified USC and to further determine whether USC practices with PCMH certification status improve the receipt of preventive services more than USC practices that are not PCMH certified in a broad representative sample of the US population.


Study Design

This study used a cross-sectional study design to compare receipt of preventive services in patients with no USC with that of patients with a USC that is either certified or not certified as a PCMH. Data were derived from the 2015 Medical Expenditure Panel Survey (MEPS), a national probability sample of the US civilian noninstitutionalized population.15 This study used the household component, prescribed medicines, and medical condition files, as well as the newly released Medical Organizations Survey (MOS) files, from MEPS. The MOS was first fielded in 2015 to the subset of MEPS respondents reporting a USC.16 The MOS collects information on organizational and financial characteristics of practices that respondents identified as their USC, including a question about the practice's PCMH certification status.

Population Inclusion/Exclusion

In 2015, MEPS data were collected on 35,427 respondents. We extracted cohorts of patients according to the age, gender, and condition criteria relevant to technical specifications of each quality metric. The sample size for each measure differed according to the measure’s inclusion and exclusion criteria.

Variable Definitions

Primary predictor variables. We used the MEPS variable HAVEUS42, which ascertains whether there is a particular doctor’s office, clinic, health center, or other place to which an individual goes if they are sick or need advice about health, to define USC status. Among respondents with a USC, the MEDHOME variable from the MEPS MOS was used to determine PCMH status.17 Given the contingent questioning process for PCMH status, comparisons are made across 3 cohorts: (1) respondents with no USC, (2) respondents with a USC that is not PCMH certified, and (3) respondents with a USC that is PCMH certified (hereafter referred to as the no-USC, non-PCMH, and PCMH cohorts, respectively).
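The contingent coding described above can be sketched as a small function. This is an illustrative reconstruction only: the argument names stand in for the MEPS HAVEUS42 and MOS MEDHOME items, whose actual response codings differ from the simple booleans assumed here.

```python
from typing import Optional

def assign_cohort(has_usc: bool, pcmh_certified: Optional[bool]) -> str:
    """Map the contingent USC/PCMH survey responses to an analysis cohort.

    has_usc         -- stands in for the MEPS HAVEUS42 item
    pcmh_certified  -- stands in for the MOS MEDHOME item; None when the
                       USC practice did not complete the MOS
    """
    if not has_usc:
        return "no-USC"           # no usual source of care reported
    if pcmh_certified is None:
        return "USC-unknown"      # USC practice did not complete the MOS
    return "PCMH" if pcmh_certified else "non-PCMH"
```

Because PCMH status is asked only of practices completing the MOS, respondents with a USC but no MOS response fall outside the three comparison cohorts, which is why the PCMH comparisons use a subset of the USC population.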

Outcome variables. Quality indicators were constructed according to guidance from the NCQA Healthcare Effectiveness Data and Information Set (HEDIS)17 or the Pharmacy Quality Alliance18 (Table 1 and eAppendix A [eAppendices available at]).

Preventive screening measures included receipt of cervical cancer screening over the prior 5 years among women aged 21 to 64 years and breast cancer screening over the prior 2 years among women aged 50 to 74 years. Colon cancer screening in male and female respondents aged 50 to 74 years was derived from MEPS questions pertaining to receipt of colonoscopy, fecal occult blood testing, or sigmoidoscopy. Inappropriate prostate cancer screening was defined for men 70 years or older reporting a prostate-specific antigen test.

We measured mental health follow-up after an emergency mental health encounter (either a mental health–specific hospitalization or an ED visit) as a visit to any provider or a mental health–specific provider in either the same or the following month. Although the official HEDIS measure limits 7- and 30-day follow-up to a mental health specialist only, we chose a more inclusive measure of follow-up, given potential uncertainty in self-report of the exact date or type of provider seen.

Patients with coronary heart disease (CHD)—defined as the presence of CHD, angina, a prior myocardial infarction, or other heart disease—were measured for receipt of an annual cholesterol screen, aspirin use for CHD prevention, and statin medication use in the prior year. We used the diabetes care survey to define receipt of an annual foot screen, eye exam, cholesterol screen, glycated hemoglobin screen, and flu vaccination. We also examined the receipt of statins and angiotensin-converting enzyme inhibitors or angiotensin receptor blockers in patients with diabetes aged 40 to 75 years. Finally, 2 asthma-specific indicators were defined from the asthma-specific survey: asthma rescue inhaler overuse (filling 4 or more short-acting β-agonists over a 3-month period) and receipt of 1 or more preventive asthma medications.
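The rescue inhaler overuse indicator above (4 or more short-acting β-agonist fills over a 3-month period) can be illustrated with a small rolling-window check over fill dates. This is one plausible reading of a PQA-style measure on hypothetical fill records; the measure's actual technical specification may count fills differently.

```python
from datetime import date

def saba_overuse(fill_dates, window_days=90, threshold=4):
    """Flag rescue-inhaler overuse: `threshold` or more short-acting
    beta-agonist fills within any rolling window of `window_days`.
    Illustrative only; not the official measure specification."""
    fills = sorted(fill_dates)
    for i in range(len(fills)):
        # count fills landing inside the window opened by fill i
        j = i
        while j < len(fills) and (fills[j] - fills[i]).days < window_days:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

For example, four fills between January 1 and March 1 all fall inside one 90-day window and would be flagged, whereas four fills spread quarterly across the year would not.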

Control variables. We used Andersen’s Behavioral Model for Health Services Utilization19 to group variables into predisposing, enabling, and need characteristics. Predisposing characteristics included respondents’ age, gender, income, education, Census region of residence, and race/ethnicity. Enabling characteristics included the type of insurance coverage, categorized as public (eg, Medicare, Medicaid, and Affordable Care Act coverage), private (employer-based), and uninsured. We also measured whether a respondent ever delayed or did not seek care because of cost concerns as an indicator of medical financial hardship. Need variables included indicators for priority medical conditions (depression, hypertension, hyperlipidemia, diabetes, asthma, and CHD) and a measure of comorbidity (sum of MEPS Clinical Classification Codes in 2015). Finally, we defined the presence of a functional limitation that disrupted activities of daily living (ADLs) and an indicator for self-report of poor or fair health.

Statistical Modeling

We present descriptive statistics for continuous variables using means and SDs and for categorical variables using counts and percent estimates across the no-USC, non-PCMH, and PCMH cohorts. MEPS survey weights were applied to all statistical comparisons to allow nationally representative estimates. Outcomes were compared in 2 stages. We first compared patients in the no-USC cohort with all patients with a USC, whether or not the USC practice was PCMH certified. Then, among patients with a USC, we compared preventive screening by PCMH status. Unadjusted comparisons were made using χ2 tests for categorical variables and t tests or analysis of variance for continuous variables. Finally, across the USC/no-USC cohorts and the PCMH/non-PCMH cohorts, we used multivariable logistic regression to test the significance of comparisons after controlling for the predisposing, enabling, and need variables described in the control variables section. All comparisons were deemed significant at an α of 0.05. All analyses were conducted using Stata/MP version 15.0 (StataCorp LP; College Station, Texas).
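As a toy illustration of how an unadjusted odds ratio and its Wald 95% CI are derived from a 2×2 table (hypothetical counts, not study data; the study's adjusted estimates come from survey-weighted multivariable logistic regression, which this sketch does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
       a = USC & screened,     b = USC & not screened
       c = no USC & screened,  d = no USC & not screened
    Toy cell counts only; not the study's weighted regression estimates."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) from the inverse cell counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For instance, `odds_ratio_ci(40, 10, 20, 30)` yields an OR of 6.0 with a CI that excludes 1, the same form of estimate reported throughout Table 5.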


We identified 7506 MEPS respondents with no USC (the no-USC cohort) and 26,512 patients with a USC. Among the USC population, 7974 respondents’ practice sites completed the MOS to assess PCMH status (eAppendix B). Among the patients with a USC, we compared those whose USC practice completed the MOS with those whose USC practice did not complete the MOS (eAppendix C). These results suggest that patients with a USC whose practice completed the MOS were more often female, poor or low-income, white/non-Hispanic, and publicly insured, with higher rates of functional ADL limitations, poorer health, and more comorbid conditions. The USC cohort that completed the MOS included 4644 respondents with a USC that was not PCMH certified (the non-PCMH cohort) and 3330 respondents with a USC certified as a PCMH (the PCMH cohort).

Table 2 describes demographic comparisons between the 7506 patients with no USC and the USC respondents whose practices completed the MOS and were certified as non-PCMH or PCMH. Demographic comparisons between the no-USC cohort and the full USC cohort regardless of PCMH status are available in eAppendix D. Among predisposing variables, the no-USC cohort was younger than the non-PCMH and PCMH cohorts (mean ages, 36.31 vs 40.29 and 38.80 years, respectively; P <.001) and more likely to be male (58% vs 45% and 43%; P <.001), college educated (25% vs 21% and 20%; P <.001), and black/non-Hispanic (14% vs 10% and 11%; P <.001) or Hispanic (24% vs 18% and 17%; P <.001). The no-USC cohort also appeared less likely than the non-PCMH and PCMH cohorts to reside in middle- or upper-income brackets (62% vs 69% and 66%; P <.001). In regard to enabling variables, the no-USC cohort was more likely than the non-PCMH and PCMH groups to be uninsured (21% vs 4% and 3%, respectively; P <.001). Interestingly, despite the lower rates of insurance among the no-USC cohort, no difference existed across populations in the likelihood of delaying or withholding care due to cost. Among the potential need variables, the no-USC cohort had significantly lower mean counts of health conditions (1.82 vs 4.68 and 4.64; P <.001), less reporting of limitations in ADLs (12% vs 27% and 27%; P <.001), and less reporting of fair/poor health (14% vs 22% and 22%; P <.001) in comparison with respondents from the non-PCMH and PCMH cohorts.

Unadjusted comparisons between the no-USC cohort and the full USC cohort regardless of PCMH status are presented in Table 3. Compared with respondents with no USC, respondents with a USC were more likely to receive appropriate colonoscopy (4.7% vs 2.6%; P = .008) and breast cancer (81.5% vs 55.5%; P <.001) and cervical cancer (93.2% vs 85.6%; P <.001) screening, and they had higher rates of inappropriate prostate cancer screening in men 70 years and older (85.7% vs 53.8%; P <.001). After an inpatient or ED-related mental health event, respondents with a USC were more likely than respondents with no USC to see any office-based provider in the month of (63.6% vs 30.3%; P = .008) and the month following (60.3% vs 31.2%; P = .023) the event. The same trends held, but were not significant, when examining follow-up to a mental health provider. Mental health outcomes should be treated as exploratory in this study, given limited sample sizes. Among the cohort of respondents with CHD, no statistical difference existed in outcomes between patients with or without a USC. However, respondents with diabetes and a USC had higher annual rates of foot screening than respondents with diabetes and no USC (71.4% vs 58.4%; P = .008). Finally, comparing respondents with asthma and a USC to respondents with asthma and no USC, we found no difference in rescue inhaler overuse but higher rates of preventive asthma medication (40.0% vs 32.5%; P = .032).

Among respondents with a USC, there were few differences in outcomes across the PCMH and non-PCMH groups (Table 4). Rates of preventive screening for cancer were very similar across groups. Similar rates were also seen for mental health follow-up and the 3 CHD variables. PCMH respondents with diabetes were significantly more likely to report having an annual foot check (77.5% vs 65.7%; P = .004) but were slightly less likely to report having an annual cholesterol screen (93.4% vs 98.0%; P = .019). No significant differences existed between respondents in the non-PCMH cohort and the PCMH cohort in any of the asthma quality indicators of interest. Again, the ability to measure mental health follow-up in this population was limited, given small sample sizes.

Controlling for predisposing, enabling, and need variables using logistic regression, we continued to see significant differences in cancer screening between patients with and without a USC (Table 5). In patients with a USC, we found higher odds of screening for breast cancer (odds ratio [OR], 2.40; 95% CI, 1.81-3.17; P <.001) and cervical cancer (OR, 1.99; 95% CI, 1.61-2.47; P <.001), as well as higher odds of inappropriate screening for prostate cancer in men 70 years and older (OR, 3.88; 95% CI, 2.05-7.32; P <.001). Similar trends in the diabetes screening indicators were also noted. However, with the exception of higher odds of receiving an annual eye exam for retinopathy (OR, 2.05; 95% CI, 1.26-3.33; P = .004), the odds no longer reached statistical significance. USC respondents with diabetes whose practices were PCMH certified had higher odds of receiving an annual foot check (OR, 2.01; 95% CI, 1.31-3.08; P = .002) but lower odds of having annual cholesterol screening (OR, 0.30; 95% CI, 0.11-0.83; P = .020) than those frequenting practices that were not PCMH certified.


Patients with a USC had higher odds of receiving preventive cancer screening services than patients without a USC, even after controlling for predisposing, enabling, and need variables. Similar relationships were found for preventive service use for asthma treatments and diabetes. However, we found few differences in the odds of using preventive services among patients with a USC recognized as a PCMH compared with those that were not PCMH certified.

It is notable that respondents with a USC in our study had much greater need for healthcare services, as reflected in higher rates of chronic conditions such as hypertension, depression, diabetes, asthma, and CHD. USC respondents also had more frequent limitations in their ADLs and poorer self-reported health status. This relationship held whether or not the USC was certified as a PCMH practice. It is perhaps not surprising that respondents with existing chronic health conditions had greater odds of reporting a USC they could rely on to access services than patients without health problems. The care needed to manage existing health problems is an entryway into the healthcare system and may serve as a conduit for patients to build a relationship with a provider or practice. However, access to care is important not only for managing existing health problems but also for disease prevention. Our results suggest that not having a USC is associated with less preventive screening.

In addition to the higher receipt of preventive services among patients with a USC, we also found higher odds of inappropriate prostate cancer screening in men 70 years and older. Having a USC is typically an indicator that individuals have better access to medical services. Alongside the benefits of improved preventive service delivery, having a USC may also serve as an entry point to potentially inappropriate care. It should be noted, however, that the inappropriate prostate screening measure is a new HEDIS guideline implemented in 2015 to reflect updated cancer screening recommendations and may take time to diffuse.17

Although PCMH practices generally had better odds of care delivery than non-PCMH practices for the quality metrics selected in our study, these differences were neither clinically meaningful nor statistically significant. We should note that we were unable to examine the individual practices that made up the PCMH cohort, and there is evidence that practices vary in their adoption of the 6 core PCMH principles.11 One might expect that PCMH practices that adhere more closely to these principles would be better positioned to improve care outcomes. Future research should consider which components of PCMH practice lead to better delivery of care.


As with any observational study, this study is subject to a number of limitations that should be considered when interpreting our results. This study relied on a cross-sectional survey of respondents and cannot establish temporality between relationships, given that data were collected on the exposure and the outcome during the same survey. One advantage of MEPS is the availability of rich patient demographic information, such as race, income, and education. Although we controlled for many variables that might confound the relationship between USC or PCMH status and outcomes, the possibility for unmeasured confounding always exists. One specific source of confounding that we were unable to observe in this study is the general willingness of patients to receive preventive services. Patients with a greater understanding of the importance of and need for preventive screening are more likely to seek out a USC, which may lead them to receive these services at higher rates. Finally, the mental health follow-up variables had limited sample sizes in our data, and those relationships should be treated as exploratory.


This study is the first to our knowledge to differentiate the effect of having a USC from the influence of whether or not a USC is PCMH certified on preventive service use among a nationally representative population of patients. Although evidence is growing that the PCMH model improves patient care, these benefits are not available to patients who cannot access the health system or do not have a USC provider. Efforts to improve access to a USC should not be forgotten in the push toward the adoption of PCMH principles.

Author Affiliations: University of Minnesota College of Pharmacy (JFF, AK), Minneapolis, MN; Center for Medication Optimization Through Practice and Policy, University of North Carolina Eshelman School of Pharmacy (BYU), Chapel Hill, NC; University of North Carolina Gillings School of Global Public Health (MED), Chapel Hill, NC.

Source of Funding: None.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JFF, BYU, MED); acquisition of data (JFF); analysis and interpretation of data (JFF, AK, BYU, MED); drafting of the manuscript (JFF, AK, BYU); critical revision of the manuscript for important intellectual content (JFF, AK, BYU, MED); statistical analysis (JFF); provision of patients or study materials (JFF); administrative, technical, or logistic support (JFF); and supervision (JFF).

Address Correspondence to: Joel F. Farley, PhD, University of Minnesota College of Pharmacy, 7-159D Weaver-Densford Hall, 308 Harvard St SE, Minneapolis, MN 55455. Email:

1. Rothman AA, Wagner EH. Chronic illness management: what is the role of primary care? Ann Intern Med. 2003;138(3):256-261. doi: 10.7326/0003-4819-138-3-200302040-00034.

2. Bauer UE, Briss PA, Goodman RA, Bowman BA. Prevention of chronic disease in the 21st century: elimination of the leading preventable causes of premature death and disability in the USA. Lancet. 2014;384(9937):45-52. doi: 10.1016/S0140-6736(14)60648-6.

3. Rui P, Hing E, Okeyode T. National Ambulatory Medical Care Survey: 2014 state and national summary tables. CDC website. Accessed April 4, 2018.

4. National Center for Chronic Disease Prevention and Health Promotion. CDC website. Accessed April 1, 2019.

5. Bodenheimer T, Chen E, Bennett HD. Confronting the growing burden of chronic disease: can the U.S. health care workforce do the job? Health Aff (Millwood). 2009;28(1):64-74. doi: 10.1377/hlthaff.28.1.64.

6. Blewett LA, Johnson PJ, Lee B, Scal PB. When a usual source of care and usual provider matter: adult prevention and screening services. J Gen Intern Med. 2008;23(9):1354-1360. doi: 10.1007/s11606-008-0659-0.

7. Greenberg JO, Barnett ML, Spinks MA, Dudley JC, Frolkis JP. The “medical neighborhood”: integrating primary and specialty care for ambulatory patients. JAMA Intern Med. 2014;174(3):454-457. doi: 10.1001/jamainternmed.2013.14093.

8. Ouwens M, Wollersheim H, Hermens R, Hulscher M, Grol R. Integrated care programmes for chronically ill patients: a review of systematic reviews. Int J Qual Health Care. 2005;17(2):141-146. doi: 10.1093/intqhc/mzi016.

9. Scholle SH, Torda P, Peikes D, Han E, Genevro J. Engaging Patients and Families in the Medical Home. Rockville, MD: Agency for Healthcare Research and Quality; 2010. Accessed April 4, 2018.

10. National Committee for Quality Assurance. PCMH standards and guidelines. Kentucky Regional Extension Center website. Published September 30, 2017. Accessed April 4, 2018.

11. Jackson GL, Powers BJ, Chatterjee R, et al. The patient-centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169-178. doi: 10.7326/0003-4819-158-3-201302050-00579.

12. van den Berk-Clark C, Doucette E, Rottnek F, et al. Do patient-centered medical homes improve health behaviors, outcomes, and experiences of low-income patients? a systematic review and meta-analysis. Health Serv Res. 2018;53(3):1777-1798. doi: 10.1111/1475-6773.12737.

13. Comprehensive Primary Care Plus. CMS website. Updated March 21, 2019. Accessed April 1, 2019.

14. Bowdoin JJ, Rodriguez-Monguio R, Puleo E, Keller D, Roche J. Associations between the patient-centered medical home and preventive care and healthcare quality for non-elderly adults with mental illness: a surveillance study analysis. BMC Health Serv Res. 2016;16(1):434. doi: 10.1186/s12913-016-1676-z.

15. Medical Expenditure Panel Survey (MEPS). Agency for Healthcare Research and Quality website. Updated August 2018. Accessed April 1, 2019.

16. Zodet M, Chowdhury S, Machlin S, Cohen J. Linked designs of the MEPS Medical Provider and Organization Surveys. In: Proceedings of the Joint Statistical Meetings Survey Research Methods Section; July 29-August 4, 2016; Chicago, IL.

17. 2017 Quality Rating System measure technical specifications. CMS website. Published September 2016. Accessed April 4, 2018.

18. PQA performance measures. Pharmacy Quality Alliance website. Accessed April 1, 2019.

19. Andersen RM. Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav. 1995;36(1):1-10.
