
Development of a Tailored Survey to Evaluate a Patient-Centered Initiative

The American Journal of Managed Care, February 2018, Volume 24, Issue 2

We developed short patient experience surveys that were sensitive to our broad quality initiative, were meaningful and acceptable to patients, and had good response rates.

ABSTRACT

Objectives: Patient-centered care initiatives have proliferated, but assessing their effectiveness requires measures tailored to their likely effects. In this article, we describe the development and pilot testing of patient surveys used to assess change in patients’ cancer care experiences over time in response to a patient-centered care initiative.

Study Design: Prospective case series.

Methods: Domains of patient-centered care were informed by the goals of the initiative and a review of existing tools. Items were selected and modified from 6 domains of validated or semivalidated instruments. Items were piloted with patients with cancer in waiting room settings to further assess the relevance and clarity of the items, whether important concepts were missing, and the acceptability of the place and timing of the surveys, and to estimate baseline top box scores (the percentage of patients scoring an item at the highest quality level) in order to minimize likely ceiling effects. The instrument was then administered to a consecutive sample of Stanford Cancer Center patients. Baseline item responses, Cronbach’s alpha, and response bias were estimated.

Results: Items were modified based on patient feedback, top box scores, and reassessment of the domains. Over 6 months, 11,273 patients were surveyed, with a 49.7% response rate. Baseline top box scores ranged from 41.7% to 75.0% for any given item. Reliability and internal consistency were high for all domains (Cronbach’s alpha ≥0.80) except for the access domain.

Conclusions: We developed reliable instruments to evaluate the essential elements of a patient-centered care initiative at an academic medical center, which minimized patient burden and maximized the response rate.

Am J Manag Care. 2018;24(2):e37-e44

Takeaway Points

  • Assessing the effectiveness of patient-centered initiatives requires measures tailored to their likely effects.
  • Our method to develop a patient-relevant, sensitive survey involved identifying and modifying existing instruments, followed by an iterative process of pilot testing with patients, measuring top box responses, and modifying further as needed.
  • This process resulted in 4 unique short instruments with high response rates and internal validity that measured patients’ cancer care experiences on domains important to them.
  • The methods could be applied in any context, resulting in more meaningful data than using a preexisting validated tool that does not meet the needs of the local community or situation.

Assessing care through the eyes of patients and their families has become increasingly important. A 2001 Institute of Medicine report, “Crossing the Quality Chasm: A New Health System for the 21st Century,” set patient-centered care as an explicit goal and called for measures to support that goal.1 Since then, significant efforts have been made to better understand the key healthcare issues patients face, their experiences with their healthcare, and ways to measure those experiences. These efforts have resulted in the development, broad implementation, and public reporting of routine patient surveys, such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys specific to hospitalizations2 and to generic ambulatory care.3

Existing tools assessing patient satisfaction with their healthcare experiences have several practical limitations, including ceiling effects, poor responsiveness to interventions, lack of specificity to condition/disease, and infeasible administration times.4,5 Ceiling effects, which occur when a high proportion of patients give the optimal score for that question, limit variation and make items insensitive to improvements. Furthermore, the general domains of experience most commonly included in surveys may miss either the disease or the targets of improvement efforts, also limiting their utility to detect change.6,7 For instance, healthcare concerns and experiences of patients receiving complex care for life-threatening chronic conditions are different from those with acute conditions.

In September 2013, Stanford Cancer Center began a transformation initiative with the overarching goal of improving patient-centered coordinated care across the cancer care continuum, with an emphasis on improving the patient experience. Because the key goal was to significantly improve the experiences of patients and their families, the interventions were designed to target and improve aspects of care that patients may find frustrating. Evaluation is an important component of the 5-year initiative. At the time of developing the evaluation plan, we found no validated cancer-specific instruments relevant across the entire care continuum, coverage that was needed to capture the breadth of the changes to be introduced. We therefore sought to develop a feasible cancer-specific survey instrument that addressed the key domains of the transformation. The surveys are 1 component of a large, complex mixed-methods evaluation effort and are therefore not intended to cover every type of patient and family experience or concern.

In this article, we describe our successful approach to tailoring existing instruments to the task. Beyond covering domains targeted by the transformation, we also tested the instrument in partnership with patient leadership to ensure relevance, assessed variation to minimize likely ceiling effects, tested internal reliability of domain-specific items, and developed an administration strategy that split the instrument into short feasible parts. Methods used to modify the existing instrument and results from our initial pilot and 6-month baseline data collection are presented. This project was given a quality improvement nonresearch designation by the Stanford Institutional Review Board.

METHODS

Setting

The Stanford Cancer Center is an academic center that provides care across more than 75,000 patient visits annually. Patient care is provided by oncologists, advanced practice providers, nurse coordinators, and associated administrative staff. Services include oncology consultations, treatment planning and delivery (eg, surgery, radiation therapy, chemotherapy, CyberKnife, bone marrow transplant), supportive care, palliative care, and survivorship care.

Survey Design

The transformation included more than 13 interventions designed to address various aspects of quality along the patient care continuum, with plans to increase the number over time. Interventions were targeted at improving multiple dimensions of patient-centered care.

Goals for the final survey were to develop a tool that: 1) contains items relevant to the transformation and patients, 2) is responsive, and 3) is acceptable to patients and easy to complete (ie, results in high response rates). Initially, we conducted a brief review of existing validated or semivalidated surveys related to “patient experience” and/or “quality of care.” In order to tailor the instrument to our needs, priority was given to surveys that had been developed with patients with cancer, were generic to all cancers, and included domains expected to be impacted by the transformation. Two such instruments were identified: the CAHPS Survey for Cancer Care and the National Research Corporation (NRC) Picker survey. At the time, the CAHPS Survey for Cancer Care had completed the first stage of validation and was in the process of initiating the second, and final, validation phase. Both surveys were developed based on input from patient focus groups to identify quality domains relevant to patients with cancer and were generic to all patients with cancer, thus meeting our criteria.8,9 The proprietary nature of the NRC Picker survey, however, made it less accessible; thus, our initial focus was on identifying strengths and limitations of the partly validated CAHPS for Cancer Care tool for our application.

We selected the following CAHPS for Cancer Care domains9 for development: Affective Communication (4 items), Shared Decision-Making (4 items), Cancer Communication (4 items), and Access (6 items). Important domains that were tested but ultimately not recommended9 included Family and Friends and Coordination. Improving coordination was an important goal of the transformation, so rather than excluding it, items for the Coordination domain for the current study were adapted from a 2009 report from New South Wales, Australia, that had used the NRC Picker survey to assess change in patient satisfaction over 2 time points.10 The other domains included in the report were also reviewed but ultimately not included because they were either: 1) very similar to items/domains we had already included or 2) specific to a particular environment (ie, inpatient experience), treatment (eg, surgery), or time period (eg, initial diagnosis), none of which aligned with our broad project goals.

Including the patient perspective in the development of the survey was an important goal; we therefore partnered with Stanford Cancer Center’s Patient Family Advisory Committee (PFAC), whose members identified friends and family as those best able to digest information during visits and as frequently responsible for coordinating patients’ care. After reviewing the items that had been selected for the pilot phase, PFAC members suggested changing “you” to “you and your family” in several questions in the Affective Communication domain related to listening, discussing, and answering questions with the care team. They also suggested adding an item on whether the patient and family had been given all of the information they wanted, with options ranging from “as much as they wanted” to “questions avoided,” to gauge the degree to which the expectations and information needs of patients and family members were met; this item was thought to fit best in the Cancer Communication domain.

The second goal, creating a responsive tool, was addressed by adopting a common response scale with at least 4 levels, rather than mixing 2-, 3-, and 4-point scales in 1 instrument as in the original items adapted. Scales were modified during the pilot to make items more sensitive.

To address the third goal, acceptability and ease of completion, we considered convenience for patients in the time and place of survey administration and the instrument length. Collective feedback from PFAC members, clinic operations, administration, and the information technology group indicated that survey administration on paper at the time of clinic visit check-in would be the most acceptable; the long-term goal included making an optional electronic version. Surveys were administered only at outpatient clinic visits in which patients met with their oncologists, due to both the logistical complications of tracking paper surveys in multiple clinic settings and the potentially compromised condition of patients during diagnostic testing and treatment visits. Items were designed to address care received in the last 3 months, rather than the most recent visit or collective visits over a longer period of time; therefore, only patients receiving relatively frequent care were eligible. To keep the surveys short, minimize patients’ time, and maximize the response rate, the items were split across 4 short surveys, each capturing 1 or 2 selected domains; the 4 unique nonoverlapping surveys were then piloted.

Pilot Test

A pilot study was conducted over a 3-month period in fall 2014 in 2 phases. The goals of the pilot were to assess 1) the surveys’ relevance and clarity to patients and 2) their acceptability regarding place and time of administration, and 3) to estimate baseline top box scores, the percentage of patients scoring an item consistent with the highest quality possible. The first phase was conducted over a 3-week period and included all outpatient cancer clinics that involved oncologist-patient consultations, for a total of 7 physical clinics representing 13 tumor groups. Each clinic distributed surveys for 1 or 2 half-days. Patients randomly received 1 of 4 paper surveys at check-in and completed it before their appointment. In-person interviews were conducted with an arbitrary sample of patients to obtain input on the timing of survey receipt, the relevance of questions, and whether important issues related to their cancer care were missing. Items were modified based on patient feedback prior to launching the larger second phase of the pilot.

The second phase was conducted daily over an approximately 2-month period in all cancer clinics to further evaluate the items and estimate top box scores. Items with top box scores higher than 80% were considered to have ceiling effects (ie, were unlikely to improve) and were therefore dropped. Patient feedback and top box scores were assessed monthly to inform version changes, with the following options: leave an item as is, simplify it while maintaining the intent of the original item, or remove it. Internal consistency of the final item sets for each domain was examined using Cronbach’s alpha.
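To make the screening rule and reliability check concrete, below is a minimal illustrative sketch (not the authors’ actual analysis code) of how item-level top box scores and a domain-level Cronbach’s alpha might be computed from pilot responses, flagging items above the 80% ceiling threshold. The data frame, column names, and values are hypothetical.

```python
import pandas as pd

# Hypothetical pilot responses for one domain: rows are respondents,
# columns are items on the 1-5 scale (5 = best response).
pilot = pd.DataFrame({
    "comm_1": [5, 5, 4, 5, 3, 5],
    "comm_2": [5, 4, 4, 5, 2, 5],
    "comm_3": [4, 5, 5, 5, 3, 5],
})

def top_box_score(item: pd.Series, top: int = 5) -> float:
    """Percentage of non-missing responses at the highest quality level."""
    responses = item.dropna()
    return 100 * (responses == top).mean()

def cronbach_alpha(domain: pd.DataFrame) -> float:
    """Internal consistency of the items belonging to one domain."""
    items = domain.dropna()                      # complete cases only
    k = items.shape[1]                           # number of items
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

for col in pilot.columns:
    score = top_box_score(pilot[col])
    flag = "  <- above 80%, candidate to drop" if score > 80 else ""
    print(f"{col}: top box = {score:.1f}%{flag}")

print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")
```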

Baseline Data Analysis

The final versions of the 4 surveys were officially launched in the clinics on December 2, 2014. All 4 surveys were given to each clinic and sorted in a random order each day, such that every patient had a 25% chance of receiving a particular survey and its domain(s); each patient was given the survey on top of the pile upon arrival at the clinic. Patients were eligible if they had had 1 or more cancer-related visits in the 3 months prior to their current visit. The first question on each survey, “Have you had an appointment at the Stanford Cancer Center in the last 3 months?”, was used to screen out ineligible patients.
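As an illustration of this allocation scheme, the daily pile of paper surveys could be assembled so that the 4 versions appear equally often in a random order, giving each arriving patient a 25% chance of receiving any one version. The sketch below is purely illustrative; the version labels are placeholders, not the actual survey titles.

```python
import random

# Placeholder labels for the 4 nonoverlapping survey versions.
SURVEY_VERSIONS = ["Survey A", "Survey B", "Survey C", "Survey D"]

def daily_survey_stack(expected_patients, seed=None):
    """Build a randomly ordered pile of paper surveys for one clinic day.

    Each version appears equally often, so a patient taking the survey from
    the top of the pile has a 25% chance of receiving any given version.
    """
    rng = random.Random(seed)
    copies = -(-expected_patients // len(SURVEY_VERSIONS))  # ceiling division
    stack = SURVEY_VERSIONS * copies
    rng.shuffle(stack)
    return stack

# Example: prepare a pile for a clinic expecting roughly 30 patients.
print(daily_survey_stack(30, seed=1)[:6])
```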

Response rates and top box scores were calculated for the first 6 months of survey implementation, December 2014 through May 2015. In order to calculate response rates, data were obtained from the electronic health record database. All patients with eligible visits were identified from this database and linked with the surveys to determine response rates. Demographics, clinical characteristics, and cancer care service utilization were also obtained. Responders and nonresponders were compared on several demographic and clinical characteristics. Top box and average scores were calculated for each survey item and domain and by demographic and clinical characteristics of responders. Items worded in a negative way were reverse coded for analysis such that the best response was a 5 and the worst response was a 1.
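A minimal sketch of the scoring conventions described above, using assumed column names and toy values: negatively worded items are reverse coded so that 5 is always the best response, completed surveys are linked to the roster of eligible patients to compute the response rate, and item-level top box and mean scores are then calculated.

```python
import pandas as pd

# Hypothetical survey responses (1-5 scale) keyed by patient ID.
surveys = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "coord_1": [5, 4, 5],        # positively worded item
    "coord_2_neg": [1, 2, 1],    # negatively worded item (1 = best as asked)
})

# Reverse code negatively worded items so that 5 is always the best response.
surveys["coord_2"] = 6 - surveys["coord_2_neg"]

# Hypothetical roster of all patients with >= 1 eligible visit (from the EHR database).
eligible = pd.DataFrame({"patient_id": [101, 102, 103, 104, 105]})

# Response rate: share of eligible patients who completed at least 1 survey.
responded = eligible["patient_id"].isin(surveys["patient_id"])
print(f"Response rate: {100 * responded.mean():.1f}%")   # 3 of 5 -> 60.0%

# Item-level top box (% of responses of 5) and mean scores.
for item in ["coord_1", "coord_2"]:
    col = surveys[item].dropna()
    print(f"{item}: top box = {100 * (col == 5).mean():.1f}%, mean = {col.mean():.2f}")
```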

RESULTS

Pilot Study

Table 1 lists the domains, items, and response options for each survey that was assessed in the initial phase of the pilot. All items were prefaced with the phrase: “In the past 3 months” or “In the past 3 months, how often did your Stanford Cancer Center healthcare team (all doctors, nurses, and staff related to your cancer care at Stanford).”

Brief interviews were conducted with 83 patients as part of the initial phase of the pilot; interviewees included a range of ages (30s through 70s) and racial groups (primarily Asian and Caucasian). Due to the brevity of the interviews, not all patients were asked all questions. Sixty-eight of 79 (86%) patients said that completing the surveys while waiting for their appointment was convenient and that they preferred it over receiving a survey via mail or email. Forty-nine of 58 (84%) patients were satisfied with the relevance, clarity, and length of the surveys, which took patients less than 3 minutes to complete. Additional items suggested by 7 patients included access to chemotherapy beds (2), lab results (4), and surgery (1). Twenty-nine of 49 (59%) patients said they would be willing to complete surveys every 3 months or more frequently, including every visit. Just 1 patient said they would only be willing to complete it once.

During the second phase of the pilot study, 3 versions of the surveys were sequentially developed and tested based on top box scores and feedback from patients. Patient feedback was obtained via comments they were invited to write on the back of the survey. Table 2 [part A and part B] shows the final surveys that were developed, their corresponding response options, item-level top box scores, and domain-level Cronbach’s alpha from the 2-month pilot study. Response options were modified from an evenly distributed 4-point scale to a 5-point scale with more positive response options near the top (“always,” “almost always,” “usually,” “sometimes,” and “never”) in order to reduce likely ceiling effects. The response option “doesn’t apply to me” was removed. Top box scores ranged from 39.7% to 84.1%. Although our principle was to remove items with top box scores above 80%, 1 item regarding timeliness of radiotherapy was kept as a sort of control because the radiation therapy department was known to have high efficiency; we therefore expected this score to stay high throughout the evaluation. The next highest top box score was 75%. Cronbach’s alpha was high for the Communication, Coordination, Cancer Information, and Shared Decision-Making domains, at 0.98, 0.83, 0.85, and 0.90, respectively, but was lower, at 0.50, for the Access domain. The last 3 items in the Access domain were considered independent items because they were specific to subsets of patients who had received tests, chemotherapy, and/or radiation therapy, respectively, at the time of the survey. These items were kept even though they applied only to a subset of patients because patients indicated during the pilot study that these waiting times were important to them and because interventions targeted them.

Baseline Results

Table 2 also shows the top box score, mean (SD), Cronbach’s alpha, and percentage missing for each question or domain, as applicable, for the 6-month baseline period. The baseline data represent approximately 2000 patient responses for each survey. The Cronbach’s alpha for each domain was very similar to, or the same as, its value in the pilot. The top box scores for each item were within a couple of percentage points of the pilot, with the exception of 2 items: one was about 5 percentage points higher than in the pilot and one was about 8 percentage points lower. With the exception of the last 3 items in the Access domain, which were expected to apply to a subset of patients, item missingness was less than 8%.

There were 11,273 patients who had at least 1 eligible visit during the baseline data collection period, 5607 (49.7%) of whom completed at least 1 survey. Table 3 [part A and part B] shows the demographic and clinical characteristics of the patients who completed and did not complete at least 1 survey. Demographic characteristics were fairly similar across the 2 populations; however, those who completed at least 1 survey were slightly more likely to be female (55% vs 52%), English-speaking (89% vs 86%), Caucasian (60% vs 57%), and aged 60 to 79 years (48% vs 45%) than those who did not complete any surveys at an eligible visit. Patients who completed at least 1 survey were also more likely to have a cancer diagnosis (85% vs 74%) and to have been going to the clinic for 6 to 24 months (35% vs 26%) than nonrespondents.

DISCUSSION

The Stanford Cancer Patient Experience Surveys are unique short instruments that measure patient experience on 5 quality domains important to patients with cancer. Their development leveraged learnings from the development of other surveys specific to patients with cancer, and the instruments were optimized by deleting questions on which the center was performing well prior to implementation of interventions and by using a common 5-point scale weighted toward positive responses to minimize ceiling effects. The consistency of scores across subgroups in both the pilot and baseline periods and the consistently high Cronbach’s alpha suggest high internal reliability and stability of the items and domains. The stability of the scores across several demographic groups prior to intervention is a positive sign that they will be effective in detecting meaningful change. Response rates were much higher than those of typical patient satisfaction surveys, such as Press-Ganey, which have been reported to be less than 20%.11 A higher response rate translates to a lower degree of response bias. We believe the higher response rate is due both to when and where patients receive the surveys (at clinic check-in) and to their brevity: no more than 9 questions taking roughly 3 minutes to complete, compared with 30 to more than 60 questions taking approximately 20 to 30 minutes for other patient experience surveys.

Limitations

A limitation of these surveys is that they are not completely validated. Validated tools, such as Cancer CAHPS, are important to apply when comparing across environments and benchmarking. External validation is not critical, however, when the need is for a tool that is sensitive to a unique local environment. The methods described herein to adapt existing validated and semivalidated instruments to our local environment could be applied anywhere, resulting in more meaningful data than using a preexisting validated tool that does not meet the needs of the local community or situation.

Another limitation is that the surveys are currently available only on paper in English; although testing during the pilot included patients from different cultural backgrounds, further testing is warranted to ensure cross-cultural comprehension and sensitivity. We plan to expand the surveys’ accessibility by translating them, in a culturally sensitive manner, into the 3 most common languages among our patient population and by expanding their dissemination to include email or our patient portal. Although consensus was fairly strong during the pilot that receiving surveys on paper at clinic check-in was preferable to email, in the interest of being patient centered, our goal is to offer a choice in survey mode. Findings from at least 1 study have shown, however, that the preferred mode does not always translate to a higher response rate.12

CONCLUSIONS

We developed a 4-part instrument to evaluate patient experience on 5 domains of care tailored to patients with cancer and our interventions. These tailored instruments have good domain-level internal reliability and variability at the top of the range. We also developed a waiting room-based administration protocol that minimizes response burden by randomly distributing brief domain-specific subsurveys, yielding higher response rates than generally reported in the literature. This method of tailoring patient experience assessment to clinical conditions and interventions can serve as a model for similar future efforts.

Acknowledgments

The authors would like to thank the Stanford Cancer Institute leadership, in particular, Sri Seshadri, Eben Rosenthal, and Bev Mitchell, for their continual support and feedback. They would also like to thank all those who have helped make the patient experience survey process successful, in particular, Gurpreet Ishpuniani, Patricia Falconer, and the many cancer center front desk staff and their managers.

Author Affiliations: Division of Primary Care and Population Health, Stanford University School of Medicine (MW, FH-S, SMA), Stanford, CA; Center for Innovation to Implementation (Ci2i), VA Palo Alto Health Care System (SMA), Palo Alto, CA.

Source of Funding: This project was supported by an anonymous donation to the Stanford Cancer Institute to improve patient-centered quality cancer care.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MW, SMA); acquisition of data (MW, FH-S); analysis and interpretation of data (MW, FH-S); drafting of the manuscript (MW, FH-S); critical revision of the manuscript for important intellectual content (MW, SMA); statistical analysis (MW, FH-S); provision of patients or study materials (MW); administrative, technical, or logistic support (MW); and supervision (MW).

Address Correspondence to: Marcy Winget, PhD, Stanford University School of Medicine, 1265 Welch Rd, MSOB #X214, Stanford, CA 94305. Email: mwinget@stanford.edu.

REFERENCES

1. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.

2. Hospital Compare. Medicare.gov website. medicare.gov/hospitalcompare/search.html. Accessed October 3, 2017.

3. CAHPS Clinician & Group Visit Survey 2.0. Agency for Healthcare Research and Quality website. ahrq.gov/cahps/surveys-guidance/cg/visit/index.html. Published 2016. Updated August 2017. Accessed October 3, 2017.

4. Hirsch O, Keller H, Albohn-Kühne C, Krones T, Donner-Banzhoff N. Pitfalls in the statistical examination and interpretation of the correspondence between physician and patient satisfaction ratings and their relevance for shared decision making research. BMC Med Res Methodol. 2011;11:71. doi: 10.1186/1471-2288-11-71.

5. Davies E, Shaller D, Edgman-Levitan S, et al. Evaluating the use of a modified CAHPS survey to support improvements in patient-centred care: lessons from a quality improvement collaborative. Health Expect. 2008;11(2):160-176. doi: 10.1111/j.1369-7625.2007.00483.x.

6. Dell-Kuster S, Sanjuan E, Todorov A, Weber H, Heberer M, Rosenthal R. Designing questionnaires: healthcare survey to compare two different response scales. BMC Med Res Methodol. 2014;14:96. doi: 10.1186/1471-2288-14-96.

7. Malin JL, Ko C, Ayanian JZ, et al. Understanding cancer patients’ experience and outcomes: development and pilot study of the Cancer Care Outcomes Research and Surveillance patient survey. Support Care Cancer. 2006;14(8):837-848. doi: 10.1007/s00520-005-0902-8.

8. Peschel RE, Peschel E. Through the patient’s eyes: understanding and promoting patient-centered care. JAMA. 1994;271(2):155. doi: 10.1001/jama.1994.03510260087037.

9. Garfinkel S, Evensen C, Keller S, Frentzel E, Cowans T. Developing the CAHPS Survey for Cancer Care: Final Report Executive Summary. Silver Spring, MD: American Institutes for Research; July 12, 2013.

10. Cancer Institute NSW. New South Wales Cancer Patient Satisfaction Survey 2008. Sydney, Australia: Cancer Institute NSW; July 2009.

11. Tyser AR, Abtahi AM, McFadden M, Presson AP. Evidence of non-response bias in the Press-Ganey patient satisfaction survey. BMC Health Serv Res. 2016;16:350. doi: 10.1186/s12913-016-1595-z.

12. Garcia I, Portugal C, Chu LH, Kawatkar AA. Response rates of three modes of survey administration and survey preferences of rheumatoid arthritis patients. Arthritis Care Res (Hoboken). 2014;66(3):364-370. doi: 10.1002/acr.22125. 
