
Designing an Illustrated Patient Satisfaction Instrument for Low-literacy Populations

Janet Weiner, MPH; Abigail Aguirre, MPA; Karima Ravenell, MS; Kim Kovath, VMD; Lindsay McDevit, MD; John Murphy, MD; David A. Asch, MD, MBA; and Judy A. Shea, PhD

Up to 25% of adults in the United States have difficulty with everyday reading tasks. As patients, adults with low literacy may not be able to complete many self-administered written questionnaires, which often are used to obtain information from patients and to gauge their satisfaction with care. We developed an illustrated version of a patient satisfaction instrument used by the Veterans Health Administration. This paper describes the extensive design process used to develop, pilot-test, and revise this 63-item illustrated instrument. A total of 438 patients were interviewed over a 1-year period to obtain feedback on illustrations, with at least 15 people viewing and commenting on each picture and revision. All pictures were revised, with the majority revised at least 4 times. We report on this iterative design process as well as on lessons we learned in illustrating questions for low-literacy populations.

(Am J Manag Care. 2004;10(part 2):853-860)

Many adults in the United States have difficulty with everyday reading tasks. In 1992, the National Adult Literacy Survey found that 40-44 million people (nearly one quarter of adults) scored in the lowest of 5 levels in reading, writing, and numerical skills.1 Most people who score at this level cannot read and write well enough to meet the needs of everyday living and working. Another 50 million demonstrated skills in the next higher level, meaning that nearly half of US adults lack the reading skills necessary to function well in an increasingly complex society.

Recently, researchers have documented low levels of "health literacy," defined as the degree to which individuals have the capacity to obtain, process, and understand basic information and services needed to make appropriate decisions regarding their health.2 Contributing to poor health literacy is the consistent finding that the literacy demands of most printed health materials exceed the reading abilities of the average American adult.3 These materials include consent forms, drug package inserts, emergency department and hospital discharge instructions, and patient education brochures.4-8

Healthcare providers and organizations rely on printed material to convey information to patients, as well as to gather information from them. The latter task usually involves completing written forms or questionnaires that provide, for example, critical details about medical history or satisfaction with care, a recognized indicator of quality.9 In addition to facing barriers to using and navigating the healthcare system, patients with low literacy also may have difficulty responding to the system and being full participants in their care. A recent study documented a link between low literacy and participants' inability to accurately complete a written health questionnaire.10

Simplifying the language of written materials can improve their comprehensibility,11,12 although this strategy mostly benefits higher-level readers.13,14 Even materials scored at a fifth-grade reading level may not be understood by about one quarter to one half of many patient populations.15 To improve readability even further, literacy experts recommend visual strategies such as limiting the number of concepts per page, using headers to break up text, using typefaces of 12 points or larger, and illustrating the text.16,17 Health educators have investigated the use of pictorial representations such as photo essays,18 photo novellas,19 and illustrations to improve the readability of written material. Some studies have found that illustrations and graphics improve the comprehensibility of health materials,20-22 while other studies have not.23,24 These findings are consistent with educational research and theory, which indicate that the effectiveness of illustrations seems to vary with the ability of the readers, the type of pictures, and the difficulty of the text.25

Little research exists on whether illustrating written questionnaires improves response rates and accessibility for low-literacy populations. In this project, we sought to develop and test an illustrated version of a patient satisfaction instrument. This task addresses a key element of the national healthcare quality agenda, which defines quality care as effective, safe, timely, and patient centered.26 The new National Healthcare Quality Report emphasizes the need to measure the patient-centeredness of care using instruments that elicit patient perceptions of care.27 Patients with low literacy may be unreachable through conventional text-based instruments and also may have different experiences and perceptions of their healthcare because of their low literacy. This article describes the extensive design process used to produce an illustrated form of the Veterans Health Administration Ambulatory Care Customer Satisfaction Survey.

METHODS

A multidisciplinary team (composed of experts in psychometrics, health services research, medicine, and literacy) and several trained patient interviewers used a combination of qualitative and quantitative methods to develop an illustrated version of the Veterans Health Administration's Ambulatory Care Customer Satisfaction Survey. The Department of Veterans Affairs (VA) Performance Analysis Center for Excellence conducts this survey using an instrument developed by the Picker Institute in Boston.28,29 The questionnaire has 62 items and is designed for written self-administration. It assesses patient satisfaction with recent ambulatory-care encounters (visits in the last 2 months and specifically the most recent visit) along 9 dimensions: access, continuity of care, courtesy, emotional support, patient education, patient preferences, pharmacy, specialist care, and visit coordination of care. There also are scores for overall satisfaction and overall coordination of care.

The items require different response formats, such as rating scales (eg, poor, fair, good, very good, excellent); agreement (eg, yes, completely; yes, somewhat; no); and reporting experience (eg, same day, 1 to 14 days, 15 to 30 days, 61 to 120 days, and more than 120 days). Because measurement properties are sample dependent, there is no single set of summary performance measures. However, in a sample of veterans, the internal consistency reliabilities among the subscales ranged from .59 to .85 (JAS, unpublished data, 2002). We added 1 item to address adequacy of parking, as focus group participants had identified parking as particularly troublesome.

The reading level of the VA instrument has not been formally assessed, though item responses are reviewed each year, and items are edited if nonresponse patterns indicate a problem with the item. We found that the questions on the VA instrument had a Flesch Reading Ease score of 76.6 (indicating "easy" or grade-school level) and a Flesch-Kincaid Grade Level score of 6 when analyzed by a computerized program.
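For readers who want to check figures like these, the two Flesch statistics follow directly from published formulas based on average sentence length and average syllables per word. The Python sketch below is illustrative only: the article does not name the computerized program used, the sample question is a hypothetical paraphrase rather than an actual survey item, and the vowel-group syllable counter is a rough stand-in for the dictionary-based counting that commercial tools perform.

import re

def count_syllables(word: str) -> int:
    # Crude approximation: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    # Published Flesch constants, applied to average sentence length (ASL)
    # and average syllables per word (ASW).
    n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    asl = n_words / n_sentences
    asw = sum(count_syllables(w) for w in words) / n_words
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw   # higher = easier
    grade_level = 0.39 * asl + 11.8 * asw - 15.59       # US school grade
    return reading_ease, grade_level

ease, grade = flesch_scores("How long did you wait for your appointment?")
print(f"Reading ease {ease:.1f}, grade level {grade:.1f}")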

The team began by auditioning graphic artists to illustrate the questions. Two artists were chosen to illustrate a sample of questions. The team worked with the artists to brainstorm ideas for depicting the concepts in the survey. Ideas were shared between the 2 artists, who used different styles and often developed different visual concepts to illustrate items.

To begin pretesting, the team conducted 4 focus groups, 2 at the VA Medical Center in Philadelphia and 2 at a nearby academic medical center hospital. A total of approximately 200 patients scheduled for visits on designated days were sent letters inviting them to participate in a focus group during the lunch hour, either before or after their clinic visit. About a week after the letters went out, a research assistant called patients to ask them if they would like to participate. Once the target number per group was reached, no more patients were called. Twelve patients were scheduled per group, and a total of 31 participated. Participants ranged in age from 21 to 76 years (mean, 57 years). Education levels were high school diploma or less (39%), some college (39%), and college degree or more (19%). Fifty-two percent of the participants were African American, and 32% were women. At the VA sites, 20 of 21 were men. At the academic medical center, 9 of 10 were women. Literacy level was not assessed in this phase of the study.

After the moderator explained the nature of the survey and the task of creating an instrument that was easier to read, each group was shown illustrations without and then with the written questions. The moderator asked participants a series of semistructured questions about how they would interpret the drawings and which style of drawings they preferred.

After these focus groups, the team conducted the remainder of the pilot testing through one-on-one interviews with patients drawn from the same 4 clinic sites. These interviews were preferable to focus groups because they were more appropriate for the task (assessing individual understanding) and more efficient (many more patients could be interviewed). Patients were recruited in an area that served as a waiting room for both the primary care clinics and the outpatient pharmacy. Interviewers approached waiting patients, introduced the study, and collected demographic information (sex, age, education, and race). Approximately 85% of the patients approached agreed to participate in the study. There were no differences in demographics between those who agreed to participate and those who did not. Interviews were conducted at a table and chairs provided by clinic personnel in a corner slightly removed from the main waiting area, but close enough that patients could still hear their names called for appointments. Patients were eligible if they were at least 18 years old and were current patients at the outpatient clinics. All aspects of the study (focus groups and interviews) were approved by the institutional review boards at the Philadelphia Veterans Affairs Medical Center and the University of Pennsylvania. Oral consent was obtained. Both institutional review boards waived written consent.

In the interview, patients were asked to review a set of 4-6 pictures. Interviewers worked in pairs: one conducted the interview, while the other took notes. The interview started with a sample picture (held constant across all interviews). The interviewer showed the patient a picture without any text and asked, "What do you think is going on in this picture?" Then the interviewer showed the patient the picture with the text and asked, "What question do you think we are trying to ask here?" Lastly, the interviewer asked, "How would you answer this question?" The interviewer rated the patients on their understanding of the picture (without text) and their understanding of the question (pictures and text) using a 3-point scale (1 = yes, 2 = partially, 3 = no). After each interview, the interviewer and note taker reviewed the standardized rating form that documented the patient's responses and the note taker's preliminary judgments, and came to consensus on the ratings for the patient's understanding of the picture and the question. The team reviewed free-text comments about aspects of the pictures that patients did not understand, and used this feedback to revise many of the pictures.
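To make the resulting data concrete, the sketch below (in Python) shows one way the consensus ratings and free-text feedback for each picture could be recorded and summarized. It is an illustration only: the record layout, field names, sample comment, and the 80% revision threshold are assumptions made for exposition, not the authors' actual forms or decision rule.

from dataclasses import dataclass, field

# The 3-point understanding scale from the interview protocol.
SCALE = {1: "yes", 2: "partially", 3: "no"}

@dataclass
class PictureReview:
    picture_id: str
    understood_picture: int   # consensus rating, picture shown without text
    understood_question: int  # consensus rating, picture shown with its text
    comments: list[str] = field(default_factory=list)  # free-text feedback

def share_fully_understood(reviews: list[PictureReview]) -> float:
    # Fraction of patients whose consensus rating was 1 ("yes") for the
    # picture shown without text.
    return sum(r.understood_picture == 1 for r in reviews) / len(reviews)

# Hypothetical decision rule (assumed, not from the paper): flag a picture
# for revision when under 80% of its reviewers fully understood it.
reviews = [PictureReview("sample_item", 1, 1),
           PictureReview("sample_item", 2, 1, ["unsure what the clock meant"])]
if share_fully_understood(reviews) < 0.8:
    print("Revise", reviews[0].picture_id,
          "- second rating was", SCALE[reviews[1].understood_picture])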

RESULTS

A total of 438 patients were recruited on-site and interviewed over a 1-year period. Their mean age was 51 years, and 72% were men. Approximately 51% of patients interviewed had a high school diploma or less. Participants identified themselves as African American (59%), white (23%), or Hispanic or other (18%).

 