Development of a Tailored Survey to Evaluate a Patient-Centered Initiative

Marcy Winget, PhD; Farnoosh Haji-Sheikhi, MS; and Steve M. Asch, MD, MPH
We developed short patient experience surveys that were sensitive to our broad quality initiative, were meaningful and acceptable to patients, and had good response rates.
ABSTRACT

Objectives: Patient-centered care initiatives have proliferated, but assessing their effectiveness requires measures tailored to their likely effects. In this article, we describe the development and pilot testing of patient surveys used to assess change in patients’ cancer care experiences over time in response to a patient-centered care initiative.

Study Design: Prospective case series.

Methods: Domains of patient-centered care were informed by the goals of the initiative and a review of existing tools. Items were selected and modified from 6 domains of validated or semivalidated instruments. Items were piloted with patients with cancer in waiting room settings to further assess their relevance and clarity, whether important concepts were missing, and the acceptability of the place and timing of the surveys, and to estimate baseline top box scores (the percentage of patients scoring an item at the highest quality level) in order to minimize likely ceiling effects. The instrument was then administered to a consecutive sample of Stanford Cancer Center patients. Baseline item responses, Cronbach’s alpha, and response bias were estimated.

Results: Items were modified based on patient feedback, top box scores, and reassessment of the domains. Over 6 months, 11,273 patients were surveyed, with a 49.7% response rate. Baseline top box scores ranged from 41.7% to 75.0% across items. Reliability and internal consistency were high for all domains (Cronbach’s alpha ≥0.80) except the Access domain.

Conclusions: We developed reliable instruments to evaluate the essential elements of a patient-centered care initiative at an academic medical center, which minimized patient burden and maximized the response rate.

Am J Manag Care. 2018;24(2):e37-e44
Takeaway Points
  • Assessing the effectiveness of patient-centered initiatives requires measures tailored to their likely effects.
  • Our method for developing a patient-relevant, change-sensitive survey involved identifying and modifying existing instruments, followed by an iterative process of pilot testing with patients, measuring top box responses, and modifying items further as needed.
  • This process resulted in 4 unique short instruments with high response rates and internal consistency that measured patients’ cancer care experiences on domains important to them.
  • The methods could be applied in any context, resulting in more meaningful data than using a preexisting validated tool that does not meet the needs of the local community or situation.
Assessing care through the eyes of patients and their families has become increasingly important. A 2001 Institute of Medicine report, “Crossing the Quality Chasm: A New Health System for the 21st Century,” set patient-centered care as an explicit goal and called for measures to support that goal.1 Since then, significant efforts have been made to better understand the key healthcare issues patients face, their experiences with their healthcare, and ways to measure those experiences. These efforts have resulted in the development, broad implementation, and public reporting of routine patient surveys, such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS), which has versions specific to hospitalizations2 and to general ambulatory care.3

Existing tools assessing patients’ satisfaction with their healthcare experiences have several practical limitations, including ceiling effects, poor responsiveness to interventions, lack of specificity to a condition or disease, and infeasible administration times.4,5 Ceiling effects, which occur when a high proportion of patients give the optimal score for a question, limit variation and make items insensitive to improvements. Furthermore, the general domains of experience most commonly included in surveys may miss either disease-specific concerns or the targets of improvement efforts, also limiting their utility to detect change.6,7 For instance, the healthcare concerns and experiences of patients receiving complex care for life-threatening chronic conditions differ from those of patients with acute conditions.

In September 2013, Stanford Cancer Center began a transformation initiative with the overarching goal of improving patient-centered coordinated care across the cancer care continuum, with an emphasis on improving the patient experience. Because the key goal was to significantly improve the experiences of patients and their families, the interventions were designed to target and improve aspects of care that patients may find frustrating. Evaluation is an important component of the 5-year initiative. At the time of developing the evaluation plan, we found no validated cancer-specific instruments relevant across the entire care continuum, which was needed to cover the breadth of the changes to be introduced. We therefore sought to develop a feasible cancer-specific survey instrument that addressed the key domains of the transformation. The surveys are 1 component of a large, complex mixed-methods evaluation effort and are therefore not intended to cover every type of patient and family experience or concern.

In this article, we describe our successful approach to tailoring existing instruments to the task. Beyond covering domains targeted by the transformation, we also tested the instrument in partnership with patient leadership to ensure relevance, assessed variation to minimize likely ceiling effects, tested internal reliability of domain-specific items, and developed an administration strategy that split the instrument into short feasible parts. Methods used to modify the existing instrument and results from our initial pilot and 6-month baseline data collection are presented. This project was given a quality improvement nonresearch designation by the Stanford Institutional Review Board.

METHODS

Setting

The Stanford Cancer Center is an academic center that handles more than 75,000 patient visits annually. Patient care is provided by oncologists, advanced practice providers, nurse coordinators, and associated administrative staff. Services include oncology consultations, treatment planning and delivery (eg, surgery, radiation therapy, chemotherapy, CyberKnife, bone marrow transplant), supportive care, palliative care, and survivorship care.

Survey Design

The transformation included more than 13 interventions designed to address various aspects of quality along the patient care continuum, with plans to increase the number over time. Interventions were targeted at improving multiple dimensions of patient-centered care.

Goals for the final survey were to develop a tool that: 1) contains items relevant to the transformation and patients, 2) is responsive, and 3) is acceptable to patients and easy to complete (ie, results in high response rates). Initially, we conducted a brief review of existing validated or semivalidated surveys related to “patient experience” and/or “quality of care.” In order to tailor the instrument to our needs, priority was given to surveys that had been developed with patients with cancer, were generic to all cancers, and included domains expected to be impacted by the transformation. Two such instruments were identified: the CAHPS Survey for Cancer Care and the National Research Corporation (NRC) Picker survey. At the time, the CAHPS Survey for Cancer Care had completed the first stage of validation and was in the process of initiating the second, and final, validation phase. Both surveys were developed based on input from patient focus groups to identify quality domains relevant to patients with cancer and were generic to all patients with cancer, thus meeting our criteria.8,9 The proprietary nature of the NRC Picker survey, however, made it less accessible; thus, our initial focus was on identifying strengths and limitations of the partly validated CAHPS for Cancer Care tool for our application.

We selected the following CAHPS for Cancer Care domains9 for development: Affective Communication (4 items), Shared Decision-Making (4 items), Cancer Communication (4 items), and Access (6 items). Important domains that were tested but ultimately not recommended9 included Family and Friends and Coordination. Improving coordination was an important goal of the transformation, so rather than excluding it, items for the Coordination domain for the current study were adapted from a 2009 report from New South Wales, Australia, that had used the NRC Picker survey to assess change in patient satisfaction over 2 time points.10 The other domains included in the report were also reviewed but ultimately not included because they were either: 1) very similar to items/domains we had already included or 2) specific to a particular environment (ie, inpatient experience), treatment (eg, surgery), or time period (eg, initial diagnosis), none of which aligned with our broad project goals.

Including the patient perspective in the development of the survey was an important goal; therefore, we partnered with Stanford Cancer Center’s Patient Family Advisory Committee (PFAC), which identified friends and family as those best able to digest information during visits and as frequently responsible for coordinating patients’ care. After reviewing the items selected for the pilot phase, PFAC members suggested changing “you” to “you and your family” in several questions in the Affective Communication domain related to listening, discussing, and answering questions with the care team. They also suggested adding an item on whether the patient and family had been given all of the information they wanted, with response options ranging from “as much as they wanted” to “questions avoided,” to gauge the degree to which the expectations and information needs of patients and family members were met; this item was thought to fit best in the Cancer Communication domain.

The second goal, creating a responsive tool, was addressed by adopting a common response scale with at least 4 levels, rather than mixing 2-, 3-, and 4-point scales in 1 instrument as in the original items we adapted. Scales were further modified during the pilot to make items more sensitive.

To address the third goal, acceptability and ease of completion, we considered the instrument’s length and the convenience for patients of the time and place of survey administration. Collective feedback from PFAC members, clinic operations, administration, and the information technology group indicated that administering the survey on paper at clinic visit check-in would be most acceptable; the long-term goal included making an optional electronic version. Surveys were administered only at outpatient clinic visits in which patients met with their oncologists, due both to the logistical complications of tracking paper surveys in multiple clinic settings and to the potentially compromised state of patients during diagnostic testing and treatment visits. Items were designed to address care received in the last 3 months, rather than the most recent visit or collective visits over a longer period; therefore, only patients receiving relatively frequent care were eligible. To keep the surveys short, minimize patients’ time, and maximize the response rate, the items were split across 4 short surveys, each capturing 1 or 2 selected domains; the 4 unique nonoverlapping surveys were then piloted.

Pilot Test

A pilot study was conducted in 2 phases over a 3-month period in fall 2014. The goals of the pilot were to assess: 1) the relevance and clarity of the surveys to patients, 2) the acceptability of the place and time of administration, and 3) baseline top box scores, the percentage of patients scoring an item at the highest quality level. The first phase was conducted over a 3-week period and included all outpatient cancer clinics that involved oncologist–patient consultations, for a total of 7 physical clinics representing 13 tumor groups. Each clinic distributed surveys for 1 or 2 half-days. Patients randomly received 1 of the 4 paper surveys at check-in and completed it before their appointment. In-person interviews were conducted with an arbitrary sample of patients to obtain their input on the timing of survey receipt, the relevance of the questions, and whether important issues related to their cancer care were missing. Items were modified based on patient feedback prior to launching the larger second phase of the pilot.

The second phase was conducted daily over approximately 2 months in all cancer clinics to further evaluate the items and estimate top box scores. Items with top box scores higher than 80% were considered to have ceiling effects (ie, were unlikely to improve) and were therefore dropped. Patient feedback and top box scores were assessed monthly to inform version changes, with the following options: an item could be left as is, simplified while maintaining its original intent, or removed. Internal consistency of the final item set for each domain was examined using Cronbach’s alpha.
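To make these calculations concrete, the sketch below computes item-level top box rates and domain-level Cronbach’s alpha from a table of responses. It is a minimal illustration rather than the study’s actual code: the column names, the 1-to-4 scoring, and the pandas-based workflow are assumptions for the example; only the 80% ceiling threshold and the use of Cronbach’s alpha come from the text.

```python
import pandas as pd

CEILING_THRESHOLD = 0.80  # top box rate above which an item was dropped in the pilot

def top_box_rates(responses: pd.DataFrame, top_score: int = 4) -> pd.Series:
    """Share of respondents giving each item the highest quality score.

    Assumes one column per item, scored 1 (lowest) to top_score (highest),
    with skipped questions left as NaN.
    """
    return (responses == top_score).sum() / responses.notna().sum()

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items of a single domain (complete cases only)."""
    complete = items.dropna()
    k = complete.shape[1]
    item_variances = complete.var(ddof=1).sum()      # sum of per-item variances
    total_variance = complete.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage with made-up item names:
# df = pd.read_csv("pilot_responses.csv")      # one row per patient, one column per item
# rates = top_box_rates(df)
# print(rates[rates > CEILING_THRESHOLD])      # candidates for removal
# print(cronbach_alpha(df[["coord_q1", "coord_q2", "coord_q3"]]))
```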

Baseline Data Analysis

The final versions of the 4 surveys were officially launched in the clinics on December 2, 2014. All 4 surveys were given to each clinic and sorted in a random order each day, such that every patient had a 25% chance of receiving any particular survey; patients were given the survey on top of the pile at the time of their arrival at the clinic. Patients were eligible if they had had 1 or more cancer-related visits in the 3 months prior to their current visit. The first question on each survey was, “Have you had an appointment at the Stanford Cancer Center in the last 3 months?” in order to screen out ineligible patients.
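As a sketch of this distribution scheme, the following illustrates the daily random sorting described above, assuming each clinic starts the day with equal numbers of the 4 survey versions; the version labels and function name are hypothetical, not from the study.

```python
import random

SURVEY_VERSIONS = ["A", "B", "C", "D"]  # the 4 nonoverlapping domain surveys

def daily_survey_pile(copies_per_version: int) -> list[str]:
    """Shuffle equal numbers of all 4 versions into one pile so that each
    arriving patient, who takes the survey on top, has a 25% chance of
    receiving any given version."""
    pile = SURVEY_VERSIONS * copies_per_version
    random.shuffle(pile)
    return pile

# e.g., daily_survey_pile(2) might yield ['C', 'A', 'D', 'B', 'A', 'D', 'C', 'B']
```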

 