
Patient Experience Midway Through a Large Primary Care Practice Transformation Initiative

Article
The American Journal of Managed Care, March 2017, Volume 23, Issue 3

Practice transformation toward comprehensive primary care slightly improved patient experience in 3 of 6 domains of care: access, provider support, and shared decision making.

ABSTRACT

Objectives: To determine how the multipayer Comprehensive Primary Care (CPC) initiative that transforms primary care delivery affects the patient experience of Medicare fee-for-service beneficiaries. The study examines how experience changed between the first and second years of CPC, how ratings of CPC practices have changed relative to ratings of comparison practices, and areas in which practices still have opportunities to improve patient experience.

Study Design: Prospective study using 2 serial cross-sectional samples of more than 25,000 Medicare fee-for-service beneficiaries attributed to 496 CPC practices and nearly 9000 beneficiaries attributed to 792 comparison practices.

Methods: We analyzed patient experience 8 to 12 months and 21 to 24 months after CPC began, measured using 6 domains of the Consumer Assessment of Healthcare Providers and Systems Clinician and Group 12-Month Survey with Patient-Centered Medical Home supplemental items. We compared changes over time in patients giving the best responses between CPC and comparison practices using a regression-adjusted difference-in-differences analysis.

Results: Patient ratings of care over time were generally comparable for CPC and comparison practices, with slightly more favorable differences—generally of small magnitude—for CPC practices than expected by chance. There were small, statistically significant, favorable effects for 2 of 6 composite measures, measured using both the proportion giving the best responses and mean responses: providers support patients in taking care of their own health, and providers discuss medication decisions. There was an additional small favorable effect on the proportion of patients giving the best response for getting timely appointments, care, and information; there was no effect on the mean for that measure.

Conclusions: During the first 2 years of CPC, CPC practices showed slightly better year-to-year patient experience ratings for selected items, indicating that transformation did not negatively affect patient experience and improved some aspects slightly. Patient ratings for the 2 groups were generally comparable, and both faced substantial room for improvement.

Am J Manag Care. 2017;23(3):178-184

Takeaway Points

The 4-year Comprehensive Primary Care (CPC) initiative aimed to transform primary care delivery.

  • Two years into CPC, Medicare patient ratings of care over time were generally comparable for CPC and comparison practices.
  • There were statistically significant favorable effects in the proportion of patients giving the best responses for 3 of 6 composite measures of the Consumer Assessment of Healthcare Providers and Systems Clinician and Group 12-Month Survey with Patient-Centered Medical Home supplemental items: getting timely appointments, care, and information (2.1 percentage points); providers support patients in taking care of their own health (3.8 percentage points); and providers discuss medication decisions with patients (3.2 percentage points).
  • Results suggest that transforming care during the first 2 years of CPC did not negatively affect patient experience but did generate some small improvements.

CMS is seeking to tie 50% of payments to quality or value through alternative payment models by 2018,1 and it is working with payers around the country to test the patient-centered medical home (PCMH) and similar models to improve primary care delivery and pay for value instead of volume.2 Thus, it is important to measure how this transformation is affecting the way in which patients experience care and to identify opportunities to continue to improve patient experience.

In a unique collaboration, CMS and 39 public and private healthcare payers launched the Comprehensive Primary Care (CPC) initiative in October 2012 to improve primary care delivery in the United States. CPC helped practices implement 5 key functions in their care delivery—1) access and continuity, 2) planned chronic and preventive care, 3) risk-stratified care management, 4) patient and caregiver engagement, and 5) coordination of care across the medical neighborhood—supported by continuous data-driven improvement, enhanced accountable payment, and optimal use of health information technology. CMS selected 502 practices in 7 US regions to participate. To help participating practices improve their care delivery, CPC provided them with enhanced payment, a learning system, and data feedback during the 4-year initiative.3-5

CPC aimed to improve cost, quality, and patient experience of care. This paper focuses on patient experience, examining how the ratings of more than 25,000 Medicare fee-for-service (FFS) beneficiaries attributed to 496 practices participating in CPC at the time of the first survey changed between the first and second years of CPC. This paper also identifies how ratings of CPC practices changed relative to the ratings of comparison practices, selected using propensity score matching,6 and areas where practices could still improve.

Patient-centeredness was a core tenet of the model, and several aspects of CPC aimed to improve patient experience of care. Practices were expected to provide better access to care, engage patients in order to guide quality improvement through surveys and/or a patient and family advisory council, integrate culturally competent self-management support and shared decision-making tools into care, coordinate care across the medical neighborhood, and use a personalized plan of care for high-risk patients. In addition, patient experience was used to help determine eligibility for shared savings payments.

METHODS

Overview

We conducted a repeated cross-sectional study using a large sample of Medicare FFS beneficiaries attributed to CPC practices and to a set of comparison practices selected using propensity score matching to have similar market-, practice-, and patient-level characteristics before CPC began. We examined changes in patient ratings and used difference-in-differences (DID) to evaluate how CPC practices’ ratings improved relative to comparison practices between 1 year (8 to 12 months) and 2 years (21 to 24 months) after CPC began. We did not draw inferences about effects from tests of each hypothesis separately, but rather from the findings across the set of questions and composites, particularly the summary composites.
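
To make the DID comparison concrete, the short Python sketch below illustrates the basic arithmetic: the change for comparison practices stands in for the secular trend, which is netted out of the change for CPC practices. The proportions are purely illustrative and are not values from the study.

```python
# Minimal difference-in-differences arithmetic with illustrative (not actual) values.
# Each value is the share of patients giving the best response in a survey round.
cpc_year1, cpc_year2 = 0.540, 0.552     # CPC practices, rounds 1 and 2
comp_year1, comp_year2 = 0.545, 0.535   # comparison practices, rounds 1 and 2

cpc_change = cpc_year2 - cpc_year1      # change for CPC practices
comp_change = comp_year2 - comp_year1   # change for comparison practices (secular trend)
did_estimate = cpc_change - comp_change # CPC effect net of the common trend

print(f"CPC change: {cpc_change:+.3f}")
print(f"Comparison change: {comp_change:+.3f}")
print(f"DID estimate: {did_estimate:+.3f}")  # +0.022, i.e., about 2.2 percentage points
```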

Setting

CPC practices are primary care practices in 7 US regions: 4 states (Arkansas, Colorado, New Jersey, and Oregon) and 3 geographic areas (Cincinnati—Dayton [Ohio and Kentucky], Capital District–Hudson Valley [New York], and Greater Tulsa, Oklahoma). We drew comparison practices from: 1) those that had applied to CPC in the same regions as the CPC practices but were not selected, and 2) those in areas near the CPC regions that had reasonably similar demographics and market factors and had enough practices for matching.

Sample and Response Rates

Using Medicare claims data, we attributed Medicare FFS beneficiaries to practices where they had most of their evaluation and management visits to primary care clinicians over the prior 2 years; using survey data, we identified attributed Medicare FFS beneficiaries who had visited the practice at least once in the 12 months before the survey round began.
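
As a rough sketch of plurality-based attribution, the snippet below assigns each beneficiary to the practice accounting for the most of their primary care evaluation and management visits; the claims extract, column names, and tie-breaking rule are hypothetical simplifications of the study's attribution logic.

```python
import pandas as pd

# Hypothetical claims extract: one row per primary care E&M visit.
claims = pd.DataFrame({
    "beneficiary_id": ["B1", "B1", "B1", "B2", "B2", "B3"],
    "practice_id":    ["P10", "P10", "P22", "P22", "P22", "P10"],
})

# Count E&M visits per beneficiary-practice pair, then keep the practice
# with the plurality of visits for each beneficiary (ties broken arbitrarily here).
visit_counts = (
    claims.groupby(["beneficiary_id", "practice_id"])
          .size()
          .rename("n_visits")
          .reset_index()
)
attribution = (
    visit_counts.sort_values("n_visits", ascending=False)
                .drop_duplicates("beneficiary_id")
                [["beneficiary_id", "practice_id"]]
)
print(attribution)
```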

In each survey round, we mailed questionnaires to a random sample of an average of 119 attributed Medicare FFS patients from each CPC practice and an average of 24 attributed Medicare FFS patients from comparison practices. These sample sizes aimed to yield completed surveys with at least 40 attributed Medicare FFS respondents per CPC practice and 14 respondents per matched set of comparison practices (the larger sample in CPC practices supported practice-level feedback). We followed the National Committee for Quality Assurance’s sampling guidelines for the number of patients to sample in each practice; more patients were sampled in practices with more clinicians.7 The average number of completed surveys was 53 per CPC practice and 18 per comparison practice set, exceeding our targets of 40 and 14, respectively.

In 2013, we obtained response rates of 45% and 46% for CPC and comparison practices, respectively. We then excluded patients from 2 of the 497 CPC practices and their comparison matched sets because the calculated weights of the patients in those practices—which combined matching weights and nonresponse weights—were large outliers and would have unduly influenced the results. This left samples of 25,843 Medicare FFS patients in 495 CPC practices and 8949 Medicare FFS patients in 818 comparison practices. For the 2014 survey, we sampled patients from 496 CPC practices: 2 of the 497 total CPC practices in 2013 closed in summer/fall 2013 and 1 split into 2 practices in 2014. Similarly, the number of comparison practices in our sample fell from 818 in 2013 to 792 in 2014. In 2014, response rates were 48% and 47% for CPC and comparison practices, respectively. The final sample for the 2014 survey contained 26,356 Medicare FFS patients in 496 CPC practices and 8865 Medicare FFS patients in 792 comparison practices. About 15% of respondents replied in both survey rounds.

Measurement of Patient Experience

Our patient survey instrument contains items from the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician and Group 12-Month Survey with Patient-Centered Medical Home supplemental items.8 The survey asks patients about their experience of care over the previous 12 months across 6 dimensions of primary care: 1) patients’ ability to get timely appointments, care, and information; 2) how well providers communicate; 3) providers’ knowledge of the care patients received from other providers; 4) if providers support patients in taking care of their own health; 5) if providers discuss medication decisions with patients; and 6) patients’ overall rating of their primary care provider. To summarize patient experience of care, we created 6 summary composite measures using 19 questions following the CAHPS Clinician and Group survey scoring instructions.9 Table 1 details the specific patient care experiences that the 6 summary composite measures evaluate. Although CMS and some other payers used these composite measures to help determine whether practices received shared savings, CPC did not focus explicitly on each item. In addition to the 19 questions in the 6 summary measures, 25 other questions gauged patient experience of care, yielding 44 total questions (listed in eAppendix Table A [eAppendices available at ajmc.com]).

Survey Administration

We administered 2 rounds of the survey: 1) June through October 2013, 8 to 12 months after CPC began; and 2) July through October 2014, 21 to 24 months after CPC began. All surveys were administered by mail, following the CAHPS Clinician and Group survey instructions, with slightly modified timing of mailings.

Analysis

We analyzed both the proportion of patients who gave the best (most favorable) response (response scales varied from 2-point [yes/no] to 11-point [0-to-10 rating scale]) and the mean response. Our main analysis focuses on the best responses. Examples of best responses are: 1) the provider always explained things to the patient in a way that was easy to understand; 2) in the last 12 months, between visits, yes, the patient did receive reminders about tests, treatment, or appointments from the provider’s office; and 3) the patient got an appointment for care needed right away that same day.

We first calculated the likelihood that patients responded to a question with the best response using logistic regressions, controlling for baseline patient and practice characteristics and education level reported on the survey. We calculated predicted probabilities for each of the 44 questions (eAppendix Table A reports results from each question).
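
A minimal sketch of this regression-adjustment step, run on synthetic data, is shown below; the covariates and variable names are illustrative stand-ins for the study's actual control set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Synthetic respondent-level data (all variable names and values are illustrative).
df = pd.DataFrame({
    "cpc": rng.integers(0, 2, n),             # 1 = attributed to a CPC practice
    "post": rng.integers(0, 2, n),            # 1 = 2014 survey round
    "age": rng.normal(72, 8, n),
    "education": rng.integers(1, 4, n),       # categorical education level
    "practice_size": rng.integers(1, 10, n),  # baseline practice characteristic
})
logit_p = -0.2 + 0.1 * df.cpc * df.post + 0.02 * (df.age - 72)
df["best_response"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of the best-response indicator on group, round, and controls.
model = smf.logit(
    "best_response ~ cpc * post + age + C(education) + practice_size",
    data=df,
).fit(disp=False)

# Predicted probability of the best response for each respondent; averaging by
# group and round gives regression-adjusted proportions for the DID comparison.
df["p_best"] = model.predict(df)
print(df.groupby(["cpc", "post"])["p_best"].mean())
```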

In addition to analyzing responses to individual questions, we looked at the 6 summary composite measures containing 19 of the 36 questions asked in both rounds, following the CAHPS Clinician and Group survey scoring instructions.9 We first calculated patient-level composite measures by averaging nonmissing binary indicators for whether the patient’s response was the best option across each question in the composite. (That is, if the composite contained 4 questions and the respondent answered all 4 and gave the best response for 3, the patient’s score was 0.75.) Ordinary least squares regressions controlled for baseline patient and practice characteristics and the respondent’s education level.
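
A compact illustration of this patient-level composite scoring, using hypothetical best-response indicators for a 4-question composite, is shown below.

```python
import numpy as np
import pandas as pd

# Hypothetical best-response indicators (1 = best response, 0 = other, NaN = not answered)
# for the four questions in one composite measure.
answers = pd.DataFrame({
    "q1": [1, 1, np.nan],
    "q2": [1, 0, 1],
    "q3": [0, 1, np.nan],
    "q4": [1, 1, 1],
})

# Patient-level composite: mean of the nonmissing indicators.
# Patient 1 answered all 4 questions and gave the best response on 3, so the score is 0.75.
answers["composite"] = answers[["q1", "q2", "q3", "q4"]].mean(axis=1, skipna=True)
print(answers["composite"])  # 0.75, 0.75, 1.0
```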

For the 36 questions in both survey rounds and the 6 composite measures, we used DID to compare the changes over time between CPC and comparison practices in the proportion of patients who gave the best response. For the 8 questions that were asked in only 1 survey round, we tested the within-year difference between CPC and comparison practices.

For all regressions, we weighted estimates using practice-level nonresponse and matching weights (to ensure that CPC and comparison samples were similar) and clustered standard errors at the practice level to account for practice-level clustering and respondents answering in more than 1 round. We considered P <.10 to be statistically significant (but relied on combined findings across related measures to draw inferences about whether the results were likely to be true effects or chance differences). The analysis had 80% power to detect small differences of 1 to 2 percentage points between CPC and comparison practices. To test the sensitivity of our findings, we repeated the analysis using mean responses (standardized on a 0-to-1 scale). (See the eAppendix for full description of methods.)
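
One way to set up such a weighted DID regression with practice-level clustered standard errors is sketched below using statsmodels on synthetic data; the variable names, weights, and specification are illustrative assumptions rather than the study's exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "practice_id": rng.integers(0, 300, n),  # cluster identifier
    "cpc": rng.integers(0, 2, n),            # 1 = CPC practice
    "post": rng.integers(0, 2, n),           # 1 = second survey round
    "weight": rng.uniform(0.5, 2.0, n),      # combined matching/nonresponse weight
})
df["composite"] = 0.5 + 0.02 * df.cpc * df.post + rng.normal(0, 0.2, n)

# Weighted least squares; the coefficient on cpc:post is the DID estimate,
# with standard errors clustered at the practice level.
did = smf.wls("composite ~ cpc * post", data=df, weights=df["weight"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["practice_id"]}
)
print(did.params["cpc:post"], did.bse["cpc:post"])
```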

RESULTS

Composite Measures

We tested the internal consistency reliability for 5 of the 6 composite measures that combine multiple questions (the composite measure for patients’ rating of the provider contains only 1 question). Four of the 5 composite measures had adequate reliability, with McDonald’s omega values between 0.80 and 0.96. One composite, providers’ knowledge of the care patients received from other providers, had lower reliability (McDonald’s omega = 0.54).
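
For reference, McDonald's omega for a set of items can be computed from a one-factor model as the squared sum of loadings divided by the squared sum of loadings plus the summed error variances. The sketch below, which uses the factor_analyzer package on simulated item responses, is a rough illustration of that calculation under those assumptions, not the study's exact procedure.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated item responses for one composite (rows = respondents, columns = items).
rng = np.random.default_rng(2)
latent = rng.normal(size=500)
items = pd.DataFrame({
    f"q{i}": 0.7 * latent + rng.normal(scale=0.7, size=500) for i in range(1, 5)
})

# One-factor model; omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
# with error variances approximated as 1 - loading^2 for standardized items.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = fa.loadings_.flatten()
error_var = 1 - loadings ** 2
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())
print(round(omega, 2))  # roughly 0.8 for these simulated items
```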

There were small, statistically significant, favorable effects of CPC on the percentage of respondents choosing the best responses for 3 of 6 composite measures: 1) getting timely appointments, care, and information (2.1 percentage points; P = .046); 2) providers support patients in taking care of their own health (3.8 percentage points; P <.001); and 3) providers discuss medication decisions with patients (3.2 percentage points; P = .006) (Figure and Table 2). These were driven by small (less than 2 percentage points) year-to-year improvements for CPC practices and small declines (less than 2 percentage points) for comparison practices. We did not adjust for multiple comparisons; however, estimated effects for half of the 6 composite measures are statistically significant at the .05 level and 2 are significant at the .01 level, so we are confident concluding there were small effects.

Despite CPC practices showing improvement over comparisons in these 3 areas of care, only about half of CPC and comparison patients gave the best responses for getting timely appointments, care, and information, and for providers supporting patients in taking care of their own health, indicating room for improvement. There was less room for improvement in the other 3 composites, where differences between CPC and comparison practices were not statistically significant (how well providers communicate, providers’ knowledge of the care patients received from other providers, and patients’ rating of providers), as more than 75% of CPC and comparison patients gave the best responses (Table 2).

To understand the factors driving the composite measure results, we turned to the 19 questions in the composite measures (eAppendix Table A). There were statistically significant and favorable effects of CPC for 5 of the 19 questions—a result of small year-to-year improvements for CPC practices coupled with small declines over time for comparison practices. For 2 of these 5 questions, changes from 2013 to 2014 in the proportion of CPC patients providing the best responses increased between 2 and 3 percentage points: 1) someone in the provider’s office asked the patient during the last 12 months whether there were things that made it hard for the patient to take care of his or her health (increased 2.8 percentage points to 35.6% in 2014), and 2) if a patient talked about starting/stopping a prescription medicine, the provider asked what the patient thought was best (increased 2.3 percentage points to 78.1% in 2014). For the other 3 questions that showed favorable effects of CPC, the year-to-year increases among CPC patients were only 0.2 to 1.1 percentage points. The year-to-year changes in CPC practices for the remaining 14 questions were not statistically different from comparison practices.

Mean responses yielded results similar to the best responses. There were small favorable effects of CPC in 2 composite measures: providers support patients in taking care of their own health (0.04 points on a 1-point scale; P <.001) and providers discuss medication decisions with patients (0.02 points; P = .006). Unlike the analysis using the best responses, the favorable difference on the composite measure for getting timely appointments, care, and information was not statistically significant (P = .117). The other 3 composite measures showed no statistically significant CPC-comparison differences (Table 2). (See eAppendix Table B for complete results.)

Question-Specific Results

Looking at all 36 questions asked in both survey rounds, the change in patients giving the best ratings of care over time was generally comparable for CPC and comparison practices, with slightly more (generally small) differences favoring the CPC practices than we would expect by chance. There were no statistically significant CPC-comparison differences for 75% of the 36 comparisons. Twenty-five percent (9 of 36), versus the 5% expected to occur by chance, showed more favorable ratings for CPC than comparison practices. However, year-to-year improvements for the CPC practices were small (5 percentage points or less) for 8 of these 9 measures, and CPC practices experienced a small decline for the other measure (eAppendix Table A).

Reflecting larger issues with healthcare delivery, CPC and comparison practices both had sizable opportunities for improvement. In 2014, for example, just 43% of patients in both groups answered that when they phoned their provider’s office for care needed right away, they usually got an appointment that same day and only 29% of patients answered that they always saw the provider within 15 minutes of appointment time when they had an appointment—2 measures of access. About a third of patients reported that practice staff spoke with them about a personal, family, mental, emotional, or substance abuse problem in the past year, suggesting the need for more screening for mental health issues. Another third of patients reported that someone at the practice had asked whether there are things that make it hard for the patient to take care of his or her health, suggesting room for more patient engagement (eAppendix Table A). Ratings were higher, but with room for improvement, for many other questions, including some on which CPC practices had statistically significant improvements relative to comparison practices.

Despite giving responses that indicate opportunities for improvement in many aspects of care, patients were pleased with their providers. Overall, three-fourths of both CPC and comparison patients rated their provider as a 9 or 10 out of 10 in both survey rounds.

There were 8 questions asked in only 1 round: 2 in 2013 and 6 in 2014. CPC patients were more likely than comparison patients to give the best responses for 4 of the 8 questions and equally likely to give the best responses for the other 4 questions. Three of the 4 favorable differences relate to transitional care. CPC required practices to improve coordination of care across the medical neighborhood, including patient follow-up after hospital stays and emergency department (ED) visits. CPC patients were more likely than comparison patients to report that someone in the provider’s office contacted them within 2 weeks after their most recent hospital stay in 2013 (or within 3 days after the most recent hospital stay in the 2014 survey) and that they were contacted by their provider’s office within 1 week of their most recent ED visit in 2014. These favorable differences for CPC, although statistically significant, were modest: between 3 and 5 percentage points (eAppendix Table A).

In 2014, CPC patients were also slightly more likely than comparison patients to report that they always received a copy of their treatment plan when receiving care for a chronic condition (46% vs 43%). But comparable proportions of CPC and comparison patients reported that they were always asked their ideas or goals when making a treatment plan (36%) (eAppendix Table A).

DISCUSSION

Two years after CPC began, CPC practices showed small improvements in the proportion of patients giving the best ratings for 3 of 6 CAHPS Clinician and Group composite measures relative to comparison practices, driven by slightly better year-to-year changes in the proportion of patients giving the best ratings of care on selected questions in the composites than expected by chance. Comparing the changes in mean responses over time between CPC and comparison practices also demonstrated small favorable effects of CPC in 2 of these 3 composite measures. Considering all 44 questions, there were slightly more favorable effects than expected by chance, again small. Generally, the proportions of patients giving the best ratings, as well as mean responses, were mostly comparable for CPC and comparison practices. These results suggest that the significant changes in care delivery during the first 2 years of CPC made minor improvements in patient experience and did not negatively affect it.

Prior studies found mixed effects of PCMH adoption on patient experience, measured using different patient survey instruments. Four studies of the impact of medical home transformation on patient experience found no statistically significant effects 1 to 2 years after the intervention began.10-13 Similar to these CPC findings 2 years into the initiative, 3 studies found statistically significant but generally small or isolated favorable effects in some dimensions of patient experience with care.14-16 For example, Kern et al (2013) followed up with patients 15 months after practice transformation and found statistically significant improvement at the 5% level in the proportion of respondents giving the best rating in the access to care domain (from 61% while the practices transformed to 69% at follow-up) and statistically significant improvement at the 10% level in experience with office staff (from 72% to 78%). The proportion of respondents giving the best ratings in the domain for follow-up with test results showed a statistically significant decline at the 10% level over time, from 76% to 69%. There were no effects in the other dimensions of patient experience that they measured: communication and relationships, disease management, doctor communication, and overall rating of the doctor. However, the study did not have a comparison group to net out any secular trends that may have affected patient experience.

Limitations

The main limitation of this study is that the comparison group was not chosen experimentally; therefore, differences between patient ratings over time for the CPC and comparison practices might reflect unmeasured preexisting differences between the groups of patients, in addition to the effects of CPC. Further, we could not obtain a list of patients to sample in time to survey patients before the initiative began. Therefore, the DID estimates might understate the true effects of CPC because CPC practices may have already made some improvements between the start of CPC and the first survey round, which began 8 months later. Alternatively, the estimates might overstate the true effects to the extent that changes (and possible disruptions) during the first year of CPC led to short-term negative effects on patients in CPC practices, depressing the first-round ratings. Indeed, the proportions of patients giving the best responses were generally 1 to 3 percentage points lower for CPC practices than for comparison practices for 35 of 38 questions in the 2013 survey and for all 6 composite measures.

CONCLUSIONS

Despite these limitations, these findings are a timely and important contribution, as CMS has launched the new CPC Plus model, the largest investment in advanced primary care to date.17 As the US healthcare system moves from traditional FFS to new models that reward value instead of volume, maintaining and improving patient experience of care will be critically important. Our results allay concerns that the disruptions inherent in the early stages of primary care transformation and payment reform impair patient experience of care. The small improvements observed here also suggest that continued primary care transformation may be a path toward better patient experience.

Acknowledgments

The authors thank Nancy Duda and Karen Bogen for their role in developing and administering the survey instrument, and Sabrina Rahman for support with statistical sampling.

Author Affiliations: Mathematica Policy Research, Princeton, NJ (KES, DNP, RSB); Mathematica Policy Research, Chicago, IL (SBD, NM); Mathematica Policy Research, Washington, DC (NAC, JJH); CMS, Baltimore, MD (TJD).

Source of Funding: Centers for Medicare & Medicaid Services (Contract # HHSM-500-2010-00026I/HHSM-500-T0006).

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (KES, DNP, SBD, NM, TJD, RSB); acquisition of data (DNP, NAC, NM, JJH, RSB); analysis and interpretation of data (KES, DNP, SBD, JJH, RSB); drafting of the manuscript (KES, DNP); critical revision of the manuscript for important intellectual content (DNP, SBD, TJD); statistical analysis (KES, DNP, NAC, JJH); provision of patients or study materials (NM); obtaining funding (DNP, RSB); administrative, technical, or logistic support (NM, TJD); and supervision (DNP, SBD, RSB).

Address Correspondence to: Kaylyn E. Swankoski, MA, Mathematica Policy Research, PO Box 2393, Princeton, NJ 08543. E-mail: KSwankoski@mathematica-mpr.com.

REFERENCES

1. The Medicare Access & CHIP Reauthorization Act of 2015: path to value. CMS website. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-LAN-PPT.pdf. Accessed June 2, 2016.

2. Primary care innovations and PCMH map. Patient-Centered Primary Care Collaborative website. https://www.pcpcc.org/initiatives. Accessed September 17, 2015.

3. Comprehensive Primary Care Initiative. CMS website. http://innovation.cms.gov/initiatives/comprehensive-primary-care-initiative/. Updated August 27, 2015. Accessed October 5, 2015.

4. Peikes D, Taylor EF, Dale S, et al; Mathematica Policy Research. Evaluation of the Comprehensive Primary Care Initiative: second annual report. CMS website. https://innovation.cms.gov/Files/reports/cpci-evalrpt2.pdf. Published April 2016. Accessed June 2, 2016.

5. Dale SB, Ghosh A, Peikes DN. Two-year costs and quality in the Comprehensive Primary Care Initiative. N Engl J Med. 2016;374(24):2345-2356. doi: 10.1056/NEJMsa1414953.

6. Taylor EF, Dale S, Peikes D, et al; Mathematica Policy Research. Evaluation of the Comprehensive Primary Care Initiative: first annual report. CMS website. https://innovation.cms.gov/Files/reports/cpci-evalrpt1.pdf. Published January 2015. Accessed September 21, 2015.

7. National Committee for Quality Assurance. Specifications for the CAHPS PCMH Survey 2012. Washington, DC: National Committee for Quality Assurance; 2011.

8. CAHPS Clinician & Group Surveys. Agency for Healthcare Research and Quality website. https://cahps.ahrq.gov/surveys-guidance/cg/index.html. Published September 2011. Accessed August 6, 2012.

9. Patient Experience Measures from the CAHPS Clinician & Group Surveys. Agency for Healthcare Research and Quality website. https://www.ahrq.gov/cahps/surveys-guidance/cg/instructions/index.html. Published May 1, 2012. Accessed December 6, 2012.

10. Heyworth L, Bitton A, Lipsitz SR, et al. Patient-centered medical home transformation with payment reform: patient experience outcomes. Am J Manag Care. 2014;20(1):26-33.

11. Jaén CR, Ferrer RL, Miller WL, et al. Patient outcomes at 26 months in the patient-centered medical home National Demonstration Project. Ann Fam Med. 2010;8(suppl 1):S57-S67. doi: 10.1370/afm.1121.

12. Maeng DD, Davis DE, Tomcavage J, Graf TR, Procopio KM. Improving patient experience by transforming primary care: evidence from Geisinger’s patient-centered medical homes. Popul Health Manag. 2013;16(3):157-163. doi: 10.1089/pop.2012.0048.

13. Reddy A, Canamucio A, Werner RM. Impact of the patient-centered medical home on veterans’ experience of care. Am J Manag Care. 2015;21(6):413-421.

14. Reid RJ, Fishman PA, Yu O, et al. Patient-centered medical home demonstration: a prospective, quasi-experimental, before and after evaluation. Am J Manag Care. 2009;15(9):e71-e87.

15. Reid RJ, Coleman K, Johnson EA, et al. The Group Health medical home at year two: cost savings, higher patient satisfaction, and less burnout for providers. Health Aff (Millwood). 2010;29(5):835-843. doi: 10.1377/hlthaff.2010.0158.

16. Kern LM, Dhopeshwarkar RV, Edwards A, Kaushal R. Patient experience over time in patient-centered medical homes. Am J Manag Care. 2013;19(5):403-410.

17. Sessums LL, McHugh SJ, Rajkumar R. Medicare’s vision for advanced primary care: new directions for care delivery and payment. JAMA. 2016;315(24):2665-2666. doi: 10.1001/jama.2016.4472.
