
Patient-Centered Oncology Care: Impact on Utilization, Patient Experiences, and Quality

The American Journal of Managed Care, September 2020, Volume 26, Issue 09

Practices implementing a patient-centered oncology care pilot had improved quality, but utilization and patient experiences did not differ from comparison practices.

ABSTRACT

Objectives: To determine whether implementation of patient-centered oncology standards in 5 medical oncology practices improved patient experiences and quality and reduced emergency department (ED) and hospital use.

Study Design: Retrospective, pre-post study design with a concurrent nonrandomized control group.

Methods: We used insurance claims to calculate all-cause hospitalizations, ED visits, and primary care and specialist office visits (n = 28,826 eligible patients during baseline and 30,843 during follow-up) and to identify patients for a care experiences survey (n = 715 preintervention and 437 postintervention respondents). For utilization and patient experience outcomes, we compared pilot practices’ performance with that of 18 comparison practices using difference-in-differences (DID) regression models accounting for practice-level clustering. We assessed pilot practice performance on 31 quality measures from the American Society of Clinical Oncology Quality Oncology Practice Initiative program.

Results: There were no statistically significant differences in hospital, ED, or primary care visits between the pilot and comparison groups over time, but there was a significant increase in specialty visits for the pilot group (adjusted DID of 0.07; 95% CI, 0.01-0.13; P = .03). For care experiences, pilot practices improved more on shared decision-making (4.03 DID composite score; P = .013), whereas the comparison group improved more on access (–6.36 DID composite score; P < .001) and exchanging information (–4.25 DID composite score; P = .013). On average, pilot practices improved performance on 65% of core quality measures from baseline to follow-up.

Conclusions: This pilot of patient-centered oncology care showed improved quality but no impact on hospitalizations/ED use and mixed results for patient experiences. Findings are consistent with early evaluations of primary care patient-centered medical homes.

Am J Manag Care. 2020;26(9):372-380. https://doi.org/10.37765/ajmc.2020.88487

_____

Takeaway Points

Practices implementing a patient-centered oncology care pilot demonstrated improvement on patient-centered process measures but no impact on utilization and mixed results for patient experiences relative to comparison practices.

  • The greatest process measure improvements occurred in symptom assessment and care planning.
  • There were no statistically significant differences in hospital, emergency department, or primary care utilization, but there was a significant increase in specialty provider visits for the pilot group.
  • Pilot practices improved in shared decision-making, whereas comparison practices improved in access and exchanging information.
  • Findings are consistent with early evaluations of primary care patient-centered medical homes.

_____

Cancer is the second leading cause of death in the United States,1,2 and about 14.5 million individuals currently live with cancer.3 There is ample evidence of cancer care quality gaps, including non–evidence-based treatment and poor communication, care planning, and coordination.4 Patients with cancer report issues with receiving insufficient or inadequate information.5-7 The Institute of Medicine, now the National Academy of Medicine (NAM), noted that communication problems often contribute to poor outcomes, and other studies show that clinicians ask for patient preferences in medical decisions only about half the time.4,8-11 Often, patients undergoing treatment for advanced cancer do not understand that treatment is not aimed at curing the disease, do not have discussions about treatment preference, or have delays in palliative care discussions that result in active treatment being prolonged during the last weeks of life, contrary to patient preferences.4,10,12-14

Patients with cancer often see multiple providers and undergo multiple treatment modalities. Although medical oncologists are typically considered the “captains” of the cancer treatment team, initial diagnosis and treatment may be decided during consultations with a primary care provider and surgeon and without the opportunity for a fully informed decision process or development of a comprehensive treatment plan.10 The lack of clarity in roles continues through treatment and survivorship.15 Current payment systems exacerbate these integration issues. Oncologists are unable to bill for shared decision-making, services to help patients navigate the health care system, or support for emotional problems.4

The National Committee for Quality Assurance (NCQA) defined a new model for patient-centered oncology care based on multiple reports from the NAM8 that have called for greater attention to improving delivery of patient-centered care for patients with cancer. The standards for patient-centered oncology care are built on the chassis of NCQA’s successful Patient-Centered Medical Home (PCMH) program, in which primary care practices are responsible for coordinating accessible, continuous, and team-based care for patients, and the Patient-Centered Specialty Care program, which promotes interactions between PCMH practices and “neighbor” specialty practices.16 While supporting this concept of the neighbor, leading oncology societies have suggested that oncology practices may serve in both roles of the “home” for patients (particularly during active treatment) as well as the “neighbor” during periods of transition and survivorship.17,18 To develop the patient-centered oncology model, NCQA first convened a multistakeholder advisory panel to identify the components of the patient-centered specialty practice standards that should be built out for oncology, as well as items from the PCMH program that are relevant, and then reviewed the recommendations with other multistakeholder committees. The most recent version of the oncology medical home standards can be found on NCQA’s website.19

In this study, we evaluated a pilot of the patient-centered oncology care model in oncology practices in southeastern Pennsylvania to determine whether the patient-centered oncology standards improve patient experiences and quality and reduce emergency department (ED) and hospital utilization.

METHODS

We selected southeastern Pennsylvania as the demonstration location because Independence Blue Cross, an insurer with large market share, was willing to participate in the evaluation. Five practices consisting of 2 large academic medical centers, 2 private physician-owned practices, and 1 hospital-based outpatient department volunteered to pilot patient-centered oncology care. The pilot practices began implementing the standards in January 2014. Practices received implementation support, including monthly webinars and technical assistance, through December 2016.

Our evaluation consisted of a retrospective, pre-post study design with a concurrent nonrandomized control group of 18 local medical oncology practices for the utilization and patient experience outcomes (Table 1). The comparison practices were similar in size and ownership, were located in the same community, and participated in the payer network. Relative to the comparison practices, the population served by the pilot practices was younger, was more likely to be male and non-White, had higher average risk scores, was less likely to have common comorbid conditions, and had a somewhat different profile of cancer types (Table 2).

This project was reviewed and approved by the Chesapeake Research Review Inc Institutional Review Board (IRB) and IRBs at 3 of the participating sites.

Measures and Analysis

Utilization. We identified 4 project time periods for the utilization analysis. “Baseline” occurred before the project began, from August 2011 to July 2013; “start-up” occurred from August 2013 to June 2014, when the pilot practices prepared for and began implementation; “intervention” occurred from July 2014 to December 2015, during which the practices continued implementing change; and “follow-up” occurred post implementation, from January to July 2016. For each of these 4 time periods, we identified a cross-section of patients attributed to a pilot or comparison practice. Specifically, for each time period, we used National Provider Identifiers and Tax Identification Numbers to identify all patients in the payer network who had an evaluation and management (E&M) claim at any medical oncology provider during that time period and then excluded patients without a claim at a pilot or comparison practice. We attributed these patients to a pilot or comparison practice if they had an E&M claim for which the performing provider was in the practice and the practice provided all, a majority, or a plurality of the patient’s E&M services. Plurality was defined by the count of E&M visits, with ties broken by the most recent E&M visit date of service relative to other oncology practices. Most patients were exclusive to the index practice (ie, had no E&M claims with any other oncology practice during the time period).
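This plurality-with-recency-tie-break rule can be expressed compactly. The following is an illustrative sketch only, not the study’s actual code; the claim fields (patient_id, practice_id, claim_date) are hypothetical names.

```python
import pandas as pd

def attribute_patients(em_claims: pd.DataFrame) -> pd.Series:
    """Attribute each patient to a single practice: plurality of E&M
    visits, with ties broken by the most recent E&M date of service.
    Expects one row per E&M claim; returns patient_id -> practice_id."""
    per_practice = (
        em_claims.groupby(["patient_id", "practice_id"])
        .agg(n_visits=("claim_date", "size"),   # count of E&M visits
             last_visit=("claim_date", "max"))  # recency tie-breaker
        .reset_index()
    )
    ranked = per_practice.sort_values(
        ["patient_id", "n_visits", "last_visit"],
        ascending=[True, False, False],
    )
    # After sorting, the first row per patient is the attributed practice.
    return ranked.groupby("patient_id").first()["practice_id"]

# Toy example: patient 1 has 2 visits at B and 1 at A, so B wins the plurality.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "practice_id": ["A", "B", "B", "A"],
    "claim_date": pd.to_datetime(
        ["2014-01-05", "2014-02-01", "2014-03-01", "2014-01-10"]),
})
print(attribute_patients(claims))  # 1 -> B, 2 -> A
```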

We calculated unadjusted rates of all-cause hospitalizations, all-cause ED visits, and primary care provider (PCP) and specialist office visits per patient per month in each of the 4 study time periods for the pilot and comparison groups. We used a difference-in-differences (DID) regression model with fixed effects for practices to estimate the effects of exposure to the intervention on utilization. Generalized estimating equations with robust standard errors were used to account for heteroscedasticity, autocorrelation, and clustering of patients within practices.20 The dependent variable in the model was the utilization rate in the time period of interest, so the effects of the intervention were represented by the coefficient estimates for project time period interacted with status (pilot or comparison). The models also controlled for practice fixed effects and a Cotiviti (formerly Verisk Health) DxCG risk score estimating the cost of underlying illness burden.21 We considered 2-tailed P values < .05 significant. We conducted sensitivity analyses to assess whether results differed based on the population definition: all continuously enrolled patients, patients with a history of cancer since 2009, and patients in active treatment.
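A DID specification of this general shape can be sketched with Python’s statsmodels GEE interface. This is a minimal illustration under our own assumptions (synthetic data, hypothetical column names, an exchangeable working correlation), not the authors’ model code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the analytic file: one row per patient-period.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "practice_id": rng.integers(0, 23, n),  # 5 pilot + 18 comparison practices
    "post": rng.integers(0, 2, n),          # 0 = baseline, 1 = follow-up
    "risk_score": rng.gamma(2.0, 1.0, n),   # DxCG-style illness-burden score
})
df["pilot"] = (df["practice_id"] < 5).astype(int)
df["util_rate"] = rng.poisson(0.1, n).astype(float)  # eg, ED visits per month

# The pilot main effect is absorbed by the practice fixed effects, so the
# DID estimate is the coefficient on the post:pilot interaction.
model = smf.gee(
    "util_rate ~ post + post:pilot + risk_score + C(practice_id)",
    groups="practice_id",                     # patients cluster within practices
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),  # assumed working correlation
)
result = model.fit()  # GEE reports robust (sandwich) standard errors by default
print(result.params["post:pilot"])
```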

Patient experience. We surveyed patients pre- and post intervention who had recently received chemotherapy, using version 2.0 of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Cancer Care Survey, which includes 5 composites and an overall rating of the drug therapy treatment team.22,23 For the preintervention survey sample, we attributed patients who had a claim for chemotherapy at pilot and comparison practices from January through June 2014 (early in the intervention); for the postintervention sample, we used claims from January through June 2016. We also added patients who were eligible for the preintervention survey to the postintervention survey sample.

Both surveys were in the field for 10 weeks, starting with an initial mailing, followed by reminder telephone calls and then a subsequent round of mailing and reminder phone calls. We removed ineligible patients from the sample frame after the mailing, including patients who were deceased, had an undeliverable address, or were no longer insured at the time the survey was administered (preintervention response base, 2304; postintervention response base, 1788). We received 715 preintervention responses and 437 postintervention responses for response rates of 31% and 24%, respectively. We removed patients who had skipped relevant survey items (12 patients each at preintervention and post intervention) and patients who indicated that they did not receive drug therapy for cancer at the attributed pilot or comparison practice during the previous 6 months (97 patients at preintervention and 165 patients at post intervention). After removing ineligible patients from the response base, there were 606 valid preintervention survey responses (175 pilot, 431 comparison) and 260 postintervention survey responses (81 pilot, 179 comparison).

We calculated descriptive data on the CAHPS survey demographic variables (Table 2) and composite scores on a 0 to 100 scale using proportional scoring and the summated rating method based on the CAHPS macro.24 This method calculates the mean of responses to each item after transforming each response to a 0 to 100 scale (100 representing the most positive response on any given item response scale; 0 representing the least positive). For example, on a yes/no response scale, if “yes” represents the most positive response, then yes is equal to 100 and no is equal to 0; on an always/usually/sometimes/never response scale, if “always” represents the most positive response, then always is equal to 100; “usually,” 67; “sometimes,” 33; and “never,” 0. A higher score means that practices were rated more positively for care on that item. To estimate the effect of exposure to the intervention, we used a DID model with fixed effects for practices. The dependent variables included the 5 survey composites and an overall rating. The effects of the intervention were represented by the coefficient estimates for survey time period (pre- or post) interacted with status (pilot or comparison). Covariates included standard CAHPS case-mix variables (age, education, self-reported health status, sex, race, ethnicity, and help with completing the survey), and we applied nonresponse weights.
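The proportional scoring step is simple enough to show directly. This sketch uses our own function and variable names; the published CAHPS macro applies the same idea with additional handling for missing responses that we omit here.

```python
def proportional_score(response: str, scale: list[str]) -> float:
    """Map a response to 0-100, where the last entry of `scale` is the
    most positive option and the first is the least positive."""
    return 100.0 * scale.index(response) / (len(scale) - 1)

# An always/usually/sometimes/never item maps to 100 / 66.7 / 33.3 / 0
# (the article rounds the middle values to 67 and 33).
freq = ["never", "sometimes", "usually", "always"]
assert proportional_score("always", freq) == 100.0
assert round(proportional_score("usually", freq)) == 67
assert proportional_score("no", ["no", "yes"]) == 0.0

# A composite score is the mean of its transformed items, so higher
# values mean the practice was rated more positively.
items = [100.0, 67.0, 100.0, 33.0]
composite = sum(items) / len(items)  # 75.0
```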

Quality. We used 31 measures from the American Society of Clinical Oncology’s Quality Oncology Practice Initiative (QOPI) to evaluate improvement in quality, including 25 core measures for patients receiving chemotherapy with new cancer diagnoses and 6 palliative care measures for patients with new advanced cancer diagnoses. The baseline sample included patients with a new cancer diagnosis from August 2011 through August 2013, and the follow-up sample included patients with a new cancer diagnosis from July 2015 through June 2016. For the core measures, pilot practices reported on a total sample of 308 patients at baseline and 349 patients at follow-up. For the palliative care measures, they reported on a total of 153 patients at baseline and 139 patients at follow-up. Demographic data for the quality measure patient sample are included in Table 3.

For the core and palliative care measures, we calculated mean, minimum, and maximum pilot practice performance rates. We calculated the percentage point difference in average performance rates across pilot practices, and we compared average pilot practice performance rates with national and regional benchmark data for all practices in the United States and all practices in HHS Region 3 (which includes Pennsylvania). There were no available benchmark data for the palliative care measures because these measures were not yet included in the regular QOPI reporting program.

RESULTS

Utilization

There were no statistically significant (P < .05) DIDs between the pilot and comparison groups on all-cause hospitalizations, all-cause ED visits, or PCP office visits (Table 4). Among patients with active chemotherapy, the unadjusted rate of all-cause hospitalization at baseline was 0.05 per member per month (PMPM) for the pilot practices and 0.05 PMPM for the comparison group; at follow-up, the rates were 0.10 and 0.08, respectively, and the adjusted DID was –0.03 (95% CI, –0.27 to 0.21). The pilot group was associated with an increase in specialist visits from the baseline to the intervention period relative to comparison practices. The unadjusted rates of specialty visits at baseline were 0.93 PMPM for the pilot group and 0.98 PMPM for the comparison group; at follow-up, the rates were 1.4 and 1.3, respectively. The adjusted DID of 0.12 (95% CI, 0.05-0.19) was significant (P = .002). The sensitivity analysis results were fairly consistent and nonsignificant across all 3 patient groups: those in active treatment, continuously enrolled patients, and patients with a history of cancer since 2009.

Patient Experience

Nonresponse analyses showed that respondents were significantly more likely to be older and White and also more likely to have more consistent insurance coverage, coronary artery disease, and a recent chemotherapy visit. In the postintervention group, respondents were also more likely to have lower DxCG risk scores. Respondents and nonrespondents did not differ with regard to sex, education, or presence of other comorbid conditions.

The DID analysis showed mixed results of the pilot on patient experiences (Table 5). Pilot participation was significantly associated with greater improvement in scores on 1 composite, shared decision-making, where the pilot score improved from 76.6 to 83.2 vs a change from 77.9 to 79.7 in the comparison group, yielding a DID result of 4.03 (95% CI, 0.86-7.21). In contrast, the comparison group had greater improvement on 2 composites, access (–6.36 DID; P < .001) and exchanging information (–4.25 DID; P = .013). There were no statistically significant differences in affective communication, patient self-management, or the overall rating. Item-level results for the composites (not shown) suggest which items drove the overall findings. Notably, the score on “asked for patient opinion about treatment choices” increased from 79.4 to 90.1 in the pilot group compared with an increase from 80.7 to 83.2 in the comparison group. Further, the score on “gave patient clear instruction how to contact them after hours” declined from 89.5 to 77.1 from pre- to post intervention in the pilot group, compared with an increase from 87.8 to 90.8 in the comparison practices.
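As an arithmetic check (ours, not the authors’), the unadjusted DID for the shared decision-making composite follows directly from the four reported means; the published 4.03 is somewhat smaller because it comes from the case-mix-adjusted, weighted regression.

```latex
\text{DID} = \left(83.2 - 76.6\right) - \left(79.7 - 77.9\right) = 6.6 - 1.8 = 4.8
```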

Quality

On average, the pilot practices improved performance between baseline and follow-up on 65% of the 25 core measures, had no improvement on 8% of measures, and declined in performance on 27% of measures (eAppendix [available at ajmc.com]). Measures that addressed care planning and assessment had the greatest improvements in performance, whereas performance declined on measures that assessed proper follow-up of identified problems. Mean pilot practice performance rates improved more than 10 percentage points and exceeded the national benchmark by follow-up on the care planning measures for documenting chemotherapy intent (80% at baseline vs 93% at follow-up) and discussing intent of chemotherapy with patients (73% vs 87%), discussing infertility risk (28% vs 63%) and fertility preservation options (53% vs 64%), and documenting chemotherapy treatment summary (41% vs 56%) and providing summary to patient (6% vs 37%).

Mean performance rates improved more than 10 percentage points for each of the symptom assessment palliative care measures for patients with advanced cancer. Similarly, the pilot practices improved on the measures for assessing pain and emotional well-being within the first 2 visits for patients with a new diagnosis of cancer; however, they had lower performance on the measures for addressing problems identified during pain and emotional well-being assessments.

DISCUSSION

Oncology practices participating in this patient-centered care pilot demonstrated an increase in specialist visits, no impact on hospitalizations and ED use, and mixed results for patient experiences, despite improvements in several patient-centered processes. These results are consistent with early studies of the primary care PCMH, which showed that financial incentives are needed to drive improvement in patient experiences and reductions in utilization and that time is required for outcomes to improve. The increase in specialty visits could indicate that the pilot practices were managing patients more closely, which is the intention of patient-centered care and better symptom management. However, more time may be needed to see a reduction in ED and hospital use. Also, practices may be affiliated with hospitals in which the incentive to reduce use of these services is dampened. Higher levels of care coordination require more staff and documentation resources, and neither this pilot project nor payers offered pilot practices increased reimbursement fees for implementing the patient-centered care model. With respect to the mixed results for patient experiences, one explanation could be that pilot practices had only partially demonstrated implementation of certain standards. For instance, comparison practices had greater improvement than pilot practices on the access composite, and we learned that at follow-up, the pilot practices had not implemented all standards related to care access, including providing timely clinical advice before and after office hours.

There are several other explanations for the mixed results. Throughout the pilot, many of the practices were undergoing upgrades to electronic health record (EHR) systems and changes in ownership or organizational structure, which key informants reported may have disrupted care and affected patient experience. In addition, the practices faced barriers in fully implementing the patient-centered oncology care model due to a lack of resources. This intervention did not include a financial payment strategy, and practices did not receive any payment changes from payers to help transform or manage patients. In contrast, recent studies have shown that where there has been greater investment in systems for population management or alternative payment strategies, there have been decreases in costs and utilization and increases in quality of care and patient satisfaction.18,25-30 Some of these studies did not evaluate changes in patient experience or quality but focused only on utilization and cost.

Limitations

This study has several limitations. The intervention was not randomized, and practices that volunteered to participate differed somewhat from the comparison practices on utilization outcomes from baseline to the start-up period. We did not have information on the implementation of patient-centered standards or quality measure performance for the comparison sites. Thus, participating and comparison practices might have differed on unobserved characteristics that could bias our estimates of intervention effects (eg, volunteering practices might have more engaged leadership that could drive observed differences that our models would erroneously attribute to the intervention; changes in EHR systems in the pilot or the comparison sites may have affected observed differences). A second limitation is that we did not perform a test of parallel trends. With only 2 years of baseline data and a small sample size, we had concerns that a parallel trends test would be underpowered and thus uninformative. A third limitation involves concerns about ceiling effects on the patient experience survey and the ability of oncology practices to improve on the composite scores over time, although there is evidence demonstrating the use of CAHPS scales for assessing and improving quality.31,32

Finally, the approach we used to attribute patients to pilot and comparison practices (to establish which practices should be accountable for patients’ care) is a common one, but we found it imperfect, and patterns of attribution changed over time. The percentage of survey respondents who denied receiving care at the attributed practice was higher at the second survey, and the number of patients attributed to the pilot practices decreased between baseline and follow-up. We found few claims for patients in 1 of the pilot practices because the practice billed chemotherapy through its affiliated hospital rather than through the medical oncology practice, and we were unable to separate the practice’s patient population claims from the hospital’s. In addition, Independence Blue Cross made changes to its provider identification system between the baseline and follow-up periods, which may have affected the number of patients attributed to practices in the follow-up utilization and survey samples. Any imperfections in attribution constitute measurement error in the main identification variable, which would bias effect estimates of the intervention toward the null (because the regression’s error term becomes correlated with the measurement error in the independent variable).
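The direction of this bias can be made concrete with the classical errors-in-variables result, which we add here for intuition (for a binary attribution indicator, nondifferential misclassification attenuates estimates analogously): if the true regressor x is observed with independent noise u, the estimated coefficient shrinks by the reliability ratio.

```latex
x^{*} = x + u, \quad \operatorname{Cov}(x, u) = 0
\;\Rightarrow\;
\hat{\beta} \xrightarrow{\;p\;} \lambda \beta, \qquad
\lambda = \frac{\sigma_x^{2}}{\sigma_x^{2} + \sigma_u^{2}} \in (0, 1)
```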

CONCLUSIONS

Oncology practices participating in this patient-centered care pilot demonstrated improvement in quality on several patient-centered processes, particularly symptom assessment and care planning, but patient experiences showed mixed results. Use of specialty care increased, but there was no impact on hospitalizations or ED use. These findings are consistent with early evaluations of the PCMH in primary care.33 This was the first such intervention to be tried and evaluated in oncology practices. Future research is needed to evaluate different oncology practice interventions and identify effective approaches for improving patient-centered outcomes in oncology and among subgroups of patients.

Acknowledgments

The authors thank Shelley Fuld Nasso, MPP, National Coalition for Cancer Survivorship; Ellen Stovall, National Coalition for Cancer Survivorship; John Sprandio, MD, Oncology Management Services; Susan Tofani, MS, Oncology Management Services; Patti Larkin, RN, MSN, American Society for Clinical Oncology; and Johann Chanin, RN, MSN, consultant, for their guidance throughout this study.

Author Affiliations: National Committee for Quality Assurance (LMR, MT, TP, SHS), Washington, DC; RAND Corporation (MF), Boston, MA; Independence Blue Cross (AS-M), Philadelphia, PA.

Source of Funding: Research reported in this paper was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (IH-12-11-4383). The statements in this paper are solely the responsibility of the authors and do not necessarily represent the views of PCORI, its Board of Governors, or its Methodology Committee.

Author Disclosures: Ms Roth’s and Dr Scholle’s institution, the National Committee for Quality Assurance, recognizes practices as oncology medical homes. Since 2016, Dr Friedberg has received financial support for research from the Agency for Healthcare Research and Quality, American Board of Medical Specialties Research and Education Foundation, American Medical Association, Center for Medicare & Medicaid Innovation, Centers for Medicare & Medicaid Services, Cedars-Sinai Medical Center, Commonwealth Fund, Milbank Memorial Fund, National Institute on Aging, National Institute on Drug Abuse, National Institute of Diabetes and Digestive and Kidney Diseases, National Institute on Minority Health and Health Disparities, Patient-Centered Outcomes Research Institute, and Washington State Institute for Public Policy. Since 2016, Dr Friedberg has received payments from Consumer Reports for consulting services. Dr Friedberg also has a clinical practice in primary care at Brigham and Women’s Hospital and thus receives payment for clinical services, via the Brigham and Women’s Physician Organization, from dozens of commercial health plans and government payers, including but not limited to Medicare, Medicaid, Blue Cross and Blue Shield of Massachusetts, Tufts Health Plan, and Harvard Pilgrim Health Plan, which are the most prevalent payers in Massachusetts. Dr Friedberg also receives compensation from Harvard Medical School for tutoring medical students in health policy. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (LMR, MT, MF, AS-M, SHS); acquisition of data (LMR, TP, AS-M); analysis and interpretation of data (LMR, MT, TP, MF, AS-M); drafting of the manuscript (LMR, MT, MF, AS-M, SHS); critical revision of the manuscript for important intellectual content (LMR, MF, SHS); obtaining funding (SHS); administrative, technical, or logistic support (TP); and supervision (LMR, SHS).

Address Correspondence to: Lindsey M. Roth, MPP, National Committee for Quality Assurance, 1100 13th St NW, Third Floor, Washington, DC 20005. Email: roth@ncqa.org.

REFERENCES

1. Hoyert DL, Xu J. Deaths: preliminary data for 2011. Natl Vital Stat Rep. 2012;61(6):1-51.

2. National Center for Health Statistics. Health, United States, 2015: with special feature on racial and ethnic health disparities. CDC. 2016. Accessed May 17, 2017. https://www.cdc.gov/nchs/data/hus/hus15.pdf

3. SEER cancer statistics review, 1975-2012. Surveillance, Epidemiology, and End Results Program. Updated November 18, 2015. Accessed May 17, 2017. http://seer.cancer.gov/csr/1975_2012/

4. Institute of Medicine. Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis. The National Academies Press; 2013.

5. Ayanian JZ, Zaslavsky AM, Guadagnoli E, et al. Patients’ perceptions of quality of care for colorectal cancer by race, ethnicity, and language. J Clin Oncol. 2005;23(27):6576-6586. doi:10.1200/JCO.2005.06.102

6. Ayanian JZ, Zaslavsky AM, Arora NK, et al. Patients’ experiences with care for lung cancer and colorectal cancer: findings from the Cancer Care Outcomes Research and Surveillance Consortium. J Clin Oncol. 2010;28(27):4154-4161. doi:10.1200/JCO.2009.27.3268

7. McInnes DK, Cleary PD, Stein KD, Ding L, Mehta CC, Ayanian JZ. Perceptions of cancer-related information among cancer survivors: a report from the American Cancer Society’s studies of cancer survivors. Cancer. 2008;113(6):1471-1479. doi:10.1002/cncr.23713

8. Institute of Medicine. Cancer Care for the Whole Patient: Meeting Psychosocial Health Needs. The National Academies Press; 2008.

9. Institute of Medicine. Patient-Centered Cancer Treatment Planning: Improving the Quality of Oncology Care: Workshop Summary. The National Academies Press; 2011.

10. Lee CN, Chang Y, Adimorah N, et al. Decision making about surgery for early-stage breast cancer. J Am Coll Surg. 2012;214(1):1-10. doi:10.1016/j.jamcollsurg.2011.09.017

11. Zikmund-Fisher BJ, Couper MP, Singer E, et al. Deficits and variations in patients’ experience with making 9 common medical decisions: the DECISIONS survey. Med Decis Making. 2010;30(5 suppl):85S-95S. doi:10.1177/0272989X10380466

12. Dow LA, Matsuyama RK, Ramakrishnan V, et al. Paradoxes in advance care planning: the complex relationship of oncology patients, their physicians, and advance medical directives. J Clin Oncol. 2010;28(2):299-304. doi:10.1200/JCO.2009.24.6397
