https://www.ajmc.com/journals/issue/2018/2018-vol24-n12/patient-experience-during-a-large-primary-care-practice-transformation-initiative
Patient Experience During a Large Primary Care Practice Transformation Initiative

Kaylyn E. Swankoski, MA; Deborah N. Peikes, PhD, MPA; Nikkilyn Morrison, MPPA; John J. Holland, BS; Nancy Duda, PhD; Nancy A. Clusen, MS; Timothy J. Day, MSPH; and Randall S. Brown, PhD

As CMS and other payers test the patient-centered medical home (PCMH) and similar models, and as they increasingly pay for care through alternative payment models that reward quality and value,1,2 it is important to measure how these efforts affect patient experience of care.

In 2012, CMS launched the Comprehensive Primary Care (CPC) initiative, a unique collaboration with 39 other payers to improve primary care delivery. The 4-year initiative helped practices implement 5 functions in their delivery of care—(1) access and continuity, (2) planned chronic and preventive care, (3) risk-stratified care management, (4) patient and caregiver engagement, and (5) coordination of care across the medical neighborhood. CMS selected 502 practices in 7 US regions to participate. To help practices improve care delivery, CPC provided enhanced payment, a robust learning system, and data feedback.3-5

CPC was expected to improve costs, quality, and patient experience of care. Patient-centeredness was a core tenet underlying CPC, and several features of this initiative aimed to improve patient experience of care. Practices were expected to improve access to care, engage patients to guide quality improvement, integrate culturally competent self-management support and shared decision making into usual care, and coordinate care across the patient’s providers. Also, CMS and some of the other participating payers considered patient experience when determining practice eligibility for shared savings payments.

Prior literature examining the effects of primary care transformation on patient experience, including a study examining the first 2 years of CPC, found few, generally small, statistically significant effects during the first 1 to 2 years of transformation.6-9 This paper examines the full 4 years of CPC to understand whether the lack of effects on patient experience for CPC and other primary care transformation models was due, in part, to short follow-up periods that did not allow adequate time for practice transformation to affect patient experience as intended. We examine how patient ratings from more than 25,000 Medicare fee-for-service (FFS) beneficiaries attributed to 490 CPC practices compare with those from more than 8000 beneficiaries in 736 comparison practices (selected using propensity score matching), in 2013 (8-12 months after CPC began) and in 2016 (5 months before CPC ended).

METHODS

Setting

For each CPC region, we used propensity score matching to select comparison practices from a pool of potential comparisons containing practices (1) in the same regions as CPC practices that had applied to CPC but were not selected and (2) in nearby areas with similar demographic and market factors that had enough practices for matching (ClinicalTrials.gov number NCT02320591).

We selected up to 5 comparison practices per CPC practice using “full matching” to form matched sets that contained 1 CPC and multiple comparison practices or 1 comparison and multiple CPC practices. The evaluation included the 497 CPC practices participating at the end of the first quarter of CPC and their 908 comparison practices. The comparison group had similar patient, practice, and market characteristics to the CPC practices before CPC began.5
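The matching procedure above can be sketched in code. The evaluation used "full matching," which forms variable-ratio matched sets; as a simplified, hedged illustration, the greedy routine below pairs each CPC practice with up to 5 nearest comparison practices by propensity score within a caliper. All practice identifiers, scores, and the caliper value are invented for illustration, not taken from the study.

```python
# Simplified stand-in for full matching: greedily assign each CPC practice
# up to 5 comparison practices with the closest propensity scores, within
# a caliper. Real full matching optimizes set composition globally.

def greedy_match(cpc_scores, pool_scores, max_ratio=5, caliper=0.05):
    """Match each CPC practice id to <= max_ratio comparison ids whose
    propensity scores fall within `caliper` of the CPC score."""
    available = dict(pool_scores)  # comparison id -> propensity score
    matched_sets = {}
    for cpc_id, score in sorted(cpc_scores.items(), key=lambda kv: kv[1]):
        # Rank remaining comparison practices by distance to this CPC score.
        ranked = sorted(available.items(), key=lambda kv: abs(kv[1] - score))
        chosen = [cid for cid, s in ranked[:max_ratio]
                  if abs(s - score) <= caliper]
        for cid in chosen:            # each comparison used at most once
            del available[cid]
        matched_sets[cpc_id] = chosen
    return matched_sets

cpc = {"A": 0.42, "B": 0.55}
pool = {"p1": 0.41, "p2": 0.44, "p3": 0.56, "p4": 0.90}
sets = greedy_match(cpc, pool)  # {"A": ["p1", "p2"], "B": ["p3"]}
```

Note that practice "p4" (score 0.90) is left unmatched by the caliper, mirroring how poorly overlapping practices drop out of a matched design.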

Sample and Response Rates

We administered an annual survey to a cross-sectional sample of Medicare FFS beneficiaries attributed to all open CPC practices (regardless of whether they still participated) and comparison practices. Using claims data, Medicare beneficiaries were attributed to practices where they had the largest share of selected evaluation and management visits to primary care clinicians over the prior 2 years. We invited about 60,000 of the roughly 300,000 Medicare FFS beneficiaries attributed to CPC practices, and 20,000 of the approximately 600,000 beneficiaries attributed to comparison practices, to respond to each survey round (Table 1). We expected responses from 40 beneficiaries per CPC practice and 14 beneficiaries per matched set of comparison practices. We selected larger CPC samples to support CPC practice–level estimates used in practice-level feedback and CMS’ shared savings calculations.
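The attribution rule described above, assigning each beneficiary to the practice with the plurality of qualifying visits, can be sketched as follows. The visit records and identifiers are invented; the real rule also restricts to selected evaluation and management (E&M) codes and specifies tie-breaking, which this sketch omits.

```python
# Hedged sketch of claims-based attribution: each beneficiary goes to the
# practice accounting for the largest share of their qualifying primary care
# E&M visits over the prior 2 years.
from collections import Counter

def attribute(visits):
    """visits: list of (beneficiary_id, practice_id) E&M visit records.
    Returns beneficiary_id -> practice with the most visits (plurality)."""
    per_bene = {}
    for bene, practice in visits:
        per_bene.setdefault(bene, Counter())[practice] += 1
    # most_common(1) picks the plurality practice; real attribution rules
    # also define tie-breaking, omitted here.
    return {bene: counts.most_common(1)[0][0]
            for bene, counts in per_bene.items()}

visits = [("b1", "P1"), ("b1", "P1"), ("b1", "P2"), ("b2", "P3")]
assignment = attribute(visits)  # {"b1": "P1", "b2": "P3"}
```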

Patient Experience Measures

Our survey instrument contains items from the Consumer Assessment of Healthcare Providers and Systems Clinician and Group survey with Patient-Centered Medical Home supplemental items (CAHPS-PCMH) version 2.0 and several additional questions that we developed about aspects of CPC.10 We measured 5 dimensions of patient experience during the prior year using 17 questions from the CAHPS-PCMH composite measures (shown in Table 2), following CAHPS scoring instructions.11 Although CMS and some other payers used these measures when determining whether practices received shared savings, CPC did not explicitly focus on each item.

In addition to the 17 questions in the 5 composites, the survey contained 30 questions about patients’ experiences with care emphasized by CPC. These include timely access to care and information; providers’ communication with patients, attention to patients’ behavioral health needs, coordination of care with specialists, and follow-up after hospital stays and emergency department (ED) visits; patient engagement in caring for chronic conditions; comprehensiveness of care; and patients’ overall rating of care received from the provider. A sixth CAHPS composite measure—providers’ knowledge of the care that patients received from other providers—was also used, but the 2 questions within that domain were examined separately because the composite had low reliability.6 Among the 30 questions not from the composite measures, 2 were asked only in 2013 and 2 only in 2016.

Survey Administration

We administered 4 rounds of the survey by mail during the 51-month initiative (June-October 2013; July-October 2014, 2015, and 2016). These occurred 8 to 12, 21 to 24, 33 to 36, and 45 to 48 months after CPC began, respectively.

Analysis

Analytic comparisons. Although we collected data from 4 survey rounds, we compared ratings between CPC and comparison practices in 2013 (or the first year the question was asked) and in 2016 to assess whether patient experience differed between the 2 groups early in and near the end of CPC. Using an intent-to-treat design, we surveyed beneficiaries in CPC practices regardless of whether the practice was still participating in CPC. We did not administer surveys to beneficiaries in practices that had closed more than 6 months before the survey.

Because we were not able to collect data before CPC began, differences in any year may reflect preexisting differences between CPC and comparison practices. Because CPC may have already affected patient experience before the first survey, we did not calculate difference-in-differences estimates, which would treat the 2013 ratings as a preintervention baseline.

Our main analyses examine the proportion of respondents who answered each question with the best response. To test the sensitivity of these findings, we also analyzed the mean response.

Regression analysis. We calculated the predicted probability of answering the best response and the mean responses using logistic and ordinary least squares regressions, controlling for baseline beneficiary and practice characteristics, as well as education level reported on the survey. Because most questions were answered by more than 95% of survey respondents, we calculated findings among nonmissing data and did not adjust for question nonresponse. For all regressions, we weighted estimates using beneficiary-level nonresponse weights (to make the sample similar to all attributed beneficiaries) and practice-level matching weights (to ensure similar CPC and comparison samples). We adjusted standard errors to account for clustering of respondents within practices and matched sets and for respondents answering in multiple rounds.
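The dual weighting scheme above (beneficiary-level nonresponse weights times practice-level matching weights) can be illustrated with the unadjusted version of the main outcome, the weighted share of respondents giving the best response. The response data and weight values below are invented; the study's actual estimates are regression-adjusted predicted probabilities.

```python
# Hedged sketch: weighted proportion of respondents choosing the best
# response, with each respondent weighted by the product of a nonresponse
# weight and a practice matching weight.

def weighted_best_share(responses, best="Always"):
    """responses: list of (answer, nonresponse_wt, matching_wt) tuples."""
    num = sum(nr * m for ans, nr, m in responses if ans == best)
    den = sum(nr * m for _, nr, m in responses)
    return num / den

data = [("Always", 1.2, 1.0), ("Usually", 0.8, 1.0), ("Always", 1.0, 0.5)]
share = weighted_best_share(data)  # (1.2 + 0.5) / (1.2 + 0.8 + 0.5) = 0.68
```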

We tested for subgroup effects in 2016 based on the beneficiary’s (1) practice’s affiliation with a healthcare system, because systems may have more resources to transform and coordinate care among their providers; (2) practice size (measured by the number of primary care clinicians in the practice in 2012, before CPC began: 1, 2-3, 4-5, or ≥6), because larger practices might provide more access and services, but there might be less direct contact between a patient and their specific clinician; and (3) relative health risk, because sicker patients might have more interactions with their practice and more need for varied types of services. Risk is measured by whether the respondent’s 2012 Hierarchical Condition Category score, a measure of risk for subsequent expenditures, was above or below the median for respondents across all survey rounds.12 See the eAppendix (available at ajmc.com) for more details.

Power. Using 2-tailed tests at the 5% significance level, the analysis had 80% power to detect effects of 1 to 3 percentage points over time and between CPC and comparison practices for the composite measures and for most individual questions. Exceptions were for questions that applied to a small proportion of respondents, such as beneficiaries who had phoned the provider’s office after hours, where we could detect differences of 7 to 12 percentage points.
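A standard normal-approximation formula for comparing two proportions reproduces the pattern described above: small detectable differences for the full sample and much larger ones for rare-denominator questions. The sample sizes below are illustrative round numbers, not the study's exact design inputs.

```python
# Hedged sketch of a minimum-detectable-difference calculation for a
# 2-tailed, two-proportion test, assuming the conservative p = 0.5 variance.
from statistics import NormalDist
from math import sqrt

def min_detectable_diff(n1, n2, p=0.5, alpha=0.05, power=0.80):
    """Smallest true difference in proportions detectable with the given
    power at significance level alpha (2-tailed)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)

# With large samples the detectable difference is small...
mde_large = min_detectable_diff(25_000, 8_000)   # about 0.02 (2 points)
# ...but for questions answered by few respondents it grows.
mde_small = min_detectable_diff(1_500, 500)      # about 0.07 (7 points)
```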

Statistical and substantial importance. To limit multiple comparisons leading to false positives, we only considered responses between beneficiaries in CPC and comparison practices to be statistically different and of substantial importance if (1) the P value was less than .05 and (2) the difference between the 2 groups was larger than 5 percentage points (selected in consultation with CAHPS experts, because literature did not define what size difference would be substantively important for CAHPS or other patient experience measures). We took this approach rather than adjusting for multiple comparisons based on the American Statistical Association’s caution against overreliance on P values.13
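The dual criterion above, requiring both statistical significance and a substantively important difference, amounts to a simple conjunction:

```python
# Minimal encoding of the flagging rule: a CPC-comparison difference counts
# only if p < .05 AND the gap exceeds 5 percentage points.

def is_notable(p_value, diff_pct_points, p_cut=0.05, size_cut=5.0):
    """True only when both the significance and size criteria are met."""
    return p_value < p_cut and abs(diff_pct_points) > size_cut

# The 2016 hospital-discharge follow-up gap (60% vs 50%, P < .001) clears
# both bars; a significant but tiny 2-point gap would not.
flagged = is_notable(0.001, 10.0)    # True
not_flagged = is_notable(0.01, 2.0)  # False
```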

RESULTS

Respondents

We excluded 7 or fewer CPC practices from the analysis in each round because they had closed more than 6 months before the survey. Depending on the round, we received completed surveys from 25,318 to 26,362 Medicare FFS beneficiaries in 495 to 497 CPC practices and 8088 to 9922 Medicare FFS beneficiaries in 811 to 908 comparison practices. Using survey responses, we excluded respondents who had not visited the practice in the year before they responded. The analytic sample included more than 25,000 beneficiaries attributed to 490 to 496 CPC practices and more than 8000 beneficiaries attributed to 736 to 818 comparison practices, depending on the round (Table 1). Response rates were between 44% and 48%. Sixteen percent of respondents answered in multiple rounds; fewer than 1% answered in all rounds.

Implementation of the Intervention

Most practices implemented the intervention and participated throughout the 4-year initiative. Practices were required to report Milestones quarterly to CMS and were placed on a corrective action plan if they did not meet them. Between 5% and 15% of participating practices were put on a corrective action plan each year; most of these practices had a deficiency in 1 of 8 Milestones and were able to correct it within 2 quarters. Over the 4-year initiative, 60 (12%) of 497 practices withdrew or were terminated from the initiative. Of these 60 practices, 29 left to become accountable care organizations and 9 were terminated; 40 of the 60 left in the last 2 years of the initiative.

Composite Measures

We tested the internal consistency reliability of the 4 composite measures that combine multiple questions; they demonstrated adequate reliability with McDonald’s omega values of 0.76 to 0.96.

For both CPC and comparison practices, ratings across the composites varied in 2013 (the first segment of each bar in the Figure). Three composites had more room for improvement (timely appointments, care, and information; providers support patients in taking care of their own health; and providers discuss medication decisions with patients); between 46% and 63% of beneficiaries gave their practices the best ratings. Beneficiaries’ ratings of the other 2 composite measures—providers’ communication with patients and patients’ overall rating of the provider—were already fairly high in 2013, with more than 75% of the responding beneficiaries providing the most favorable responses.

Despite room for improvement in the 2013 scores, improvements in beneficiaries’ ratings between 2013 and 2016 were minimal (less than 3 percentage points) for 4 of the 5 composite measures. The second segment in the Figure shows the changes over time for each composite measure; because changes were small, the second segment is barely visible for most composites. The exception was the composite measure for providers support patients in taking care of their own health. Between 2013 and 2016, both CPC and comparison practices experienced a statistically significant and meaningful improvement in beneficiaries’ ratings of this composite of 5 to 6 percentage points (P <.001).

Overall, CPC did not improve beneficiary ratings for the 5 composite measures, with CPC and comparison practices reporting comparable ratings for each measure in 2013 and in 2016. Results for mean responses were also comparable for CPC and comparison practices (Table 3). There were no differential effects of CPC on beneficiaries’ ratings in 2016 for the 3 subgroups (eAppendix Table 3).

Individual Questions Not in the Composite Measures

Responses to the 28 questions asked in 2013 or 2014 and the 28 asked in 2016 that were not in the composite measures also indicate that beneficiaries’ experiences with care were generally comparable in CPC and comparison practices over the 4-year initiative (eAppendix Table 4). There were no meaningful differences in beneficiaries’ ratings for 26 of the 28 questions in 2013 (or the earliest year the question was asked) and 25 of the 28 questions asked in 2016.

The notable exceptions were that more beneficiaries in CPC than comparison practices reported receiving follow-up care after hospital and ED visits both early in and near the end of CPC. In 2013, more beneficiaries in CPC than in comparison practices who stayed in a hospital overnight or longer in the previous year reported that they saw a doctor, nurse practitioner, or physician assistant in the provider’s office within 2 weeks of the most recent hospitalization (70% vs 65%, respectively; P = .002). In 2014, the first year the question was asked, more beneficiaries in CPC than in comparison practices who visited the ED for care in the previous year reported that they were contacted by their provider’s office within 1 week of the most recent visit (53% vs 48%, respectively; P <.001).

In 2016, more beneficiaries in CPC than in comparison practices reported that they were contacted by their provider’s office within 3 days of a hospital discharge (60% vs 50%, respectively; P <.001). (Beginning in 2014, to align with the 2014 CPC requirements, we shortened the follow-up window used in the 2013 question and no longer limited the follow-up to contact occurring in the practice.) More beneficiaries in CPC than in comparison practices also reported that they were contacted by the provider’s office within 1 week of the most recent ED visit (59% vs 51%, respectively; P <.001).

There was 1 unfavorable difference. Fewer beneficiaries in CPC practices than in comparison practices reported in 2016 that they always received an answer to their medical question as soon as needed when emailing their provider (69% vs 75%, respectively; P = .039). However, more than 92% of beneficiaries in both CPC and comparison practices reported that they did not email their provider with a medical question in the past 12 months and therefore did not answer this question.

Findings were similar for mean responses (eAppendix Table 6).

Overall rating of providers and care. Despite responses indicating opportunities to improve care delivery, beneficiaries were generally pleased with their providers. Roughly 80% of beneficiaries in both CPC and comparison practices rated their provider as a 9 or 10 out of 10 in 2013 and 2016. In 2014, the survey began asking beneficiaries to compare the care they received in the past 12 months with the care they received at the practice in the previous year. In each of the 3 years this question was asked, about 17% of beneficiaries in CPC and comparison practices reported that the care they received from the provider was much better than in the prior year; about two-thirds reported that the care compared with 1 year ago was about the same (data not shown).

DISCUSSION

These findings suggest that although CPC practices were undergoing substantial changes to improve care delivery, CPC did not alter patient experience. The areas in which there were effects—the larger percentages of beneficiaries in CPC practices who reported that their provider followed up with them after hospital stays and ED visits—reflect CPC’s emphasis on increasing primary care involvement after acute care.

Prior studies have found mixed effects of PCMH adoption on patient experience. These studies examined patient experience after a shorter exposure of practices to transformation (1-2 years). Four studies that looked at the impact of medical home transformation on patient experience of care found no statistically significant effects on patient experience 1 to 2 years after the intervention began.14-17 Four other studies (2 of which focused on the same intervention) found favorable and statistically significant, but generally small or isolated, effects in some dimensions of patient experience.6-9

The first of these studies found small (2%-3%) improvements in 6 of 7 domains: quality of doctor–patient interactions, shared decision making, coordination of care, access, patient activation and involvement, and goal setting and tailoring.7,8 The second study added 1 year of follow-up and found that effects had moderated, with small differences between the 2 groups in only 4 of the 7 domains. Both of these studies examined a single intervention clinic and 2 comparison clinics. The third study found an 8-percentage-point improvement in access but no effects in 6 other domains.9 This study did not have a comparison group to net out secular trends potentially affecting patient experience. The fourth study examined the 2-year effects of CPC and found small (2-4 percentage points) favorable effects on 3 composite measures (getting timely appointments, care, and information; providers support patients in taking care of their own health; and providers discuss medication decisions with patients) and no effects on 3 other measures. These early favorable findings were small and driven by small improvements over time for CPC practices and small declines for comparison practices.6

Limitations

The main limitation to our study is that the comparison group was not chosen experimentally, and we could not obtain a list of patients in time to survey patients before the initiative began. Therefore, differences between patient ratings over time for the CPC and comparison practices may reflect baseline differences, in addition to the effects of CPC. Another limitation is that the CAHPS questions focused on care from the provider during visits—which is only 1 aspect of care that CPC aimed to affect—and did not include other targets of CPC, such as increasing the use of team-based care and delivering care in other settings aside from the office. The survey also did not assess whether CPC practices were stinting on care in response to the demands of practice transformation and the financial incentives of the CPC initiative (although given no CPC–comparison differences in their ratings of their providers or access to care, this seems unlikely). In addition, the analysis included survey responses from beneficiaries in practices that were no longer participating in CPC. It is unlikely, however, that the lack of meaningful effects of practice transformation on patient experience in this study is due to practices not implementing the model, given the relatively small number of practices that were placed on corrective action plans or left the initiative.

CONCLUSIONS

Despite these limitations, these 4-year findings provide one of the longest perspectives on the effect of a care delivery model on patient experience and cover more than 33,000 beneficiaries in each round. The findings are relevant to CMS’ new CPC+ model, which began in 2017 and covers more than 2 million attributed Medicare beneficiaries,18-20 and the other advanced payment models that CMS and other payers are promoting. As the healthcare system moves from FFS to new models that reward value, maintaining and improving patient experience of care is critically important. These results allay concerns that the disruptions inherent in primary care transformation and payment reform impair patient experience, but they raise questions about how future primary care initiatives can succeed in improving patient experience.