Evidence-Based Oncology
November 2014
Volume 20
Issue SP16

The Challenge of Developing the Ideal Patient Satisfaction Survey

Cancer continues to be one of the leading causes of morbidity and mortality in the United States. For many patients, the mental, physical, and emotional stress of cancer can be overwhelming, affecting many aspects of their lives. Patients with cancer commonly experience anxiety, traumatic stress, or depression; long waiting times, poor communication between staff and patients, lack of information, and the absence of psychosocial care can be primary triggers of that stress. Healthcare providers therefore face multiple demands in meeting the complex needs of these patients.

The complexity of cancer care today, with new diagnostics, treatment options, supportive care, and rehabilitation, demands a multidisciplinary approach to patient care that must be monitored to identify areas for improvement. Patient satisfaction may also vary because of factors outside the provider’s control, including, in some instances, the limitations of existing cancer treatment. To gain an improved understanding of healthcare quality, patients should be surveyed regularly using patient satisfaction surveys. This allows providers and institutions to systematically evaluate patient perception of, and satisfaction with, the care rendered.1,2

This process does not seem challenging. Providers want to know if patients are satisfied with their services and whether changes need to be implemented. Wouldn’t patients share that information? If you operate an oncology treatment center, you’ve certainly heard your fair share of compliments and complaints. How do you determine whether those compliments apply across the full practice, and how do you distinguish complaints about systemic issues from immediate, easy-to-address problems? The optimal solution is to field a recurring survey, with the appropriate questions, to a broad spectrum of patients in adequate numbers. This would allow comparisons among physicians and locations, as well as evaluation of performance against national benchmarks.

Despite the broad choice of survey instruments and the companies that provide assistance in collecting and analyzing the responses, questions remain. How do you identify the right questionnaire? How often should it be distributed, and by what means? What do you do with the results? Are they actionable? Are they comparable? Although such surveys are not yet required for private practices, New England Cancer Specialists fielded a patient satisfaction survey 6 years ago to develop a reliable way to grade our practice.

The choice of survey method, paper versus electronic, can be important. Several studies have demonstrated that electronic collection of patient information facilitates sending real-time feedback to healthcare providers. Advantages include more clinician queries about health-related quality-of-life issues, improved provider-perceived communication with patients, and better provider-perceived tracking of quality-of-life changes over time.3 Disadvantages include technology intimidation, lack of access to a computer or the Internet, and the difficulty of achieving adequate participation.4

New England Cancer Specialists’ first venture into a patient satisfaction survey was in 2008. We developed our own survey covering 10 general areas. Patients used a 5-point scale to rate us on ease and access, the telephone system, waiting times for appointments, physicians/providers, nursing staff, general staff, financial services (billing and advocacy), confidentiality, and the physical facility. More than 350 questionnaires were distributed every 6 months. The survey was short, patients had no trouble completing it, and it generated a 95% response rate. It was, however, labor-intensive for the practice: it covered 4 offices and 14 physicians, and responses were tabulated by hand. Significantly, our facility was not comparable to any other, which made it difficult to gauge the magnitude of any issue. Results were outstanding, with 97% satisfaction across the board. Our lowest rating was for wait time, with which 86% of patients were satisfied; only 14% rated us at 3 or less.5 Wait time is a very important component of overall satisfaction, as stress levels can rise significantly with longer waits.
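The hand tabulation described above amounts to counting ratings on the 5-point scale and reporting the share at 4 or 5. A minimal sketch, using hypothetical ratings chosen only to illustrate the arithmetic (they are not the practice's actual data):

```python
from collections import Counter

def satisfaction_rate(ratings):
    """Share of respondents rating 4 or 5 on a 5-point scale."""
    counts = Counter(ratings)
    satisfied = counts[4] + counts[5]
    return satisfied / len(ratings)

# Hypothetical wait-time ratings from 50 returned questionnaires
ratings = [5] * 30 + [4] * 13 + [3] * 5 + [2] * 2
print(f"{satisfaction_rate(ratings):.0%} satisfied")  # prints "86% satisfied"
```

The same tally, run per question and per office, reproduces the kind of per-area percentages reported above, which is exactly the work that made the in-house survey labor-intensive across 4 offices and 14 physicians.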

In 2011, New England Cancer Specialists began to use the Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CGCAHPS) survey, a standardized tool for measuring patient perceptions of care delivered by a provider (eg, physician, nurse practitioner, physician assistant) in an office setting. The survey was 8 pages long, with 47 questions. There was a significant reduction in the labor needed to field the survey, but also a significant increase in cost to the practice. We hired a national research service company to distribute the survey, which was mailed within 2 weeks of a patient visit. Completed surveys were returned to the service and entered into a database, which allowed comparison with other institutions; however, there was no benchmarking capability for oncology. We could compare ourselves with dermatology, family medicine, or otolaryngology practices, but not with oncology ones. Response rates dropped to 42.9%, and time points were often difficult to compare because of an inadequate number of responses.

A key issue for us was the inability to adjust this validated survey per our requirements. Questions such as, “Did you talk with this doctor about any health problems or concerns?” had patients calling or bringing in the survey asking if we knew they had cancer. Regardless of this and the other issues cited, New England Cancer Specialists achieved consistently high satisfaction ratings. Overall scores were in the mid to high 80s, with Key Driver scores in the 90s—always above the national average.

In 2013, New England Cancer Specialists became part of the COME HOME Project, which offered access to the Oncology Medical Home (OMH) Consumer Assessment of Healthcare Providers and Systems (CAHPS). This survey adjusted questions on the CGCAHPS to better match the needs of an oncology practice. Cancer care providers using it can benchmark against similar institutions across the country in the areas of timeliness, thoroughness, communication, and friendliness, key areas in patient satisfaction. We are now in a better position to understand whether problems are specific to our practice or endemic across all oncology centers. In addition to being able to compare our providers with one another, or with providers at other institutions, we can compare time points in order to pinpoint changes in satisfaction levels.

The survey on the OMH website can be completed in one of 3 ways: via iPads provided to the patients, a link on our website, or hard copy. Each method has pros and cons, with the largest variables observed in response rates and timeliness of returns.

We chose to distribute hard copy surveys to retain control of the speed and number of responses collected and to promptly address any shortfalls. Surveys were anonymous and were dropped into collection boxes set discreetly throughout the practice. The OMH survey is shorter than the CGCAHPS survey, and our response rate is back up in the 90s. While using the hard copy survey means transcribing the data manually, once that is done it’s easily uploaded into the OMH database and comparative results can be viewed within 24 hours. We have utilized the data to date in a number of different ways. Patient ratings on wait times for each physician were shared with all providers, creating instant awareness of the issue and schedule adjustments. Additionally, our phone system was improved to better accommodate incoming calls.
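The transcription-then-upload workflow described above can be sketched as follows. The field names here are assumptions for illustration only; the actual OMH upload schema is not described in this article:

```python
import csv

# Hypothetical column layout for hand-transcribed responses.
# The real OMH database's import format may differ.
FIELDS = ["provider", "visit_month", "timeliness",
          "thoroughness", "communication", "friendliness"]

def write_transcribed(responses, path):
    """Write hand-transcribed survey responses to a CSV file for upload.

    `responses` is a list of dicts keyed by FIELDS, one per paper survey.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(responses)
```

Keeping the transcription in a flat, consistently keyed format is what makes the subsequent upload quick: once the paper responses are keyed in, the file can be submitted in a single step and compared against benchmarks shortly afterward.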

To date, more than 1100 oncologists are registered and 26,306 surveys have been collected, with results available online. The OMH survey has been officially approved by CAHPS and is now recognized by federal and commercial insurance companies as meeting their requirements for a patient satisfaction survey.


Our only remaining problem? Devising a strategy that could improve on 95% satisfaction levels. That is a very good problem to have.

Address correspondence to: Betsy Chase, Research Director, New England Cancer Specialists.

E-mail: chaseb@newecs.org

References

1. Lis CG, Rodeghier M, Gupta D. Distribution and determinants of patient satisfaction in oncology: a review of the literature. Patient Prefer Adherence. 2009;3:287-304.

2. Velikova G, Brown JM, Smith AB, Selby PJ. Computer-based quality of life questionnaires may contribute to doctor-patient interactions in oncology. Br J Cancer. 2002;86(1):51-59.

3. Kamo N, Dandapani SV, Miksad RA, et al. Evaluation of the SCA instrument for measuring patient satisfaction with cancer care administered via paper or via the Internet. Ann Oncol. 2011;22(3):723-729.

4. Taenzer P, Bultz BD, Carlson LE, et al. Impact of computerized quality of life screening on physician behaviour and patient satisfaction in lung cancer outpatients. Psychooncology. 2000;9(3):203-213.

5. New England Cancer Specialists Internal Data.
