
Designing an Illustrated Patient Satisfaction Instrument for Low-literacy Populations

The American Journal of Managed Care
November 2004 - Part 2, Volume 10, Issue 11 Pt 2

Up to 25% of adults in the United States have difficulty with everyday reading tasks. As patients, adults with low literacy may not be able to complete many self-administered written questionnaires, which often are used to obtain information from patients and to gauge their satisfaction with care. We developed an illustrated version of a patient satisfaction instrument used by the Veterans Health Administration. This paper describes the extensive design process used to develop, pilot-test, and revise this 63-item illustrated instrument. A total of 438 patients were interviewed over a 1-year period to obtain feedback on illustrations, with at least 15 people viewing and commenting on each picture and revision. All pictures were revised, with the majority revised at least 4 times. We report on this iterative design process as well as on lessons we learned in illustrating questions for low-literacy populations.

(Am J Manag Care. 2004;10(part 2):853-860)

Many adults in the United States have difficulty with everyday reading tasks. In 1992, the National Adult Literacy Survey found that 40-44 million people (nearly one quarter of adults) scored in the lowest of 5 levels in reading, writing, and numerical skills.1 Most people who score at this level cannot read and write well enough to meet the needs of everyday living and working. Another 50 million demonstrated skills in the next higher level, meaning that nearly half of the US population lacks the reading skills necessary to function well in an increasingly complex society.

Recently, researchers have documented low levels of "health literacy," defined as the degree to which individuals have the capacity to obtain, process, and understand basic information and services needed to make appropriate decisions regarding their health.2 Contributing to poor health literacy is the consistent finding that the literacy demands of most printed health materials exceed the reading abilities of the average American adult.3 These materials include consent forms, drug package inserts, emergency department and hospital discharge instructions, and patient education brochures.4-8

Healthcare providers and organizations rely on printed material to convey information to patients, as well as to gather information from them. The latter task usually involves completing written forms or questionnaires that provide, for example, critical details about medical history or satisfaction with care, a recognized indicator of quality.9 In addition to facing barriers to using and navigating the healthcare system, patients with low literacy also may have difficulty responding to the system and being full participants in their care. A recent study documented a link between low literacy and participants' inability to accurately complete a written health questionnaire.10

Simplifying the language of written materials can improve their comprehensibility,11,12 although this strategy mostly benefits higher-level readers.13,14 Even materials scored at a fifth-grade reading level may not be understood by about one quarter to one half of many patient populations.15 To improve readability even further, literacy experts recommend visual strategies such as limiting the number of concepts per page, using headers to break up text, using typefaces of 12 points or larger, and illustrating the text.16,17

Health educators have investigated the use of pictorial representations such as photo essays,18 photo novellas,19 and illustrations to improve the readability of written material. Some studies have found that illustrations and graphics improve the comprehensibility of health materials,20-22 while other studies have not.23,24 These findings are consistent with educational research and theory, which indicates that the effectiveness of illustrations seems to vary with the ability of the readers, the type of pictures, and the difficulty of the text.25

Little research exists on whether illustrating written questionnaires improves response rates and accessibility for low-literacy populations. In this project, we sought to develop and test an illustrated version of a patient satisfaction instrument. This task addresses a key element of the national healthcare quality agenda, which defines quality care as effective, safe, timely, and patient centered.26 The new National Healthcare Quality Report emphasizes the need to measure the patient-centeredness of care using instruments that elicit patient perceptions of care.27 Patients with low literacy may be unreachable through conventional text-based instruments and also may have different experiences and perceptions of their healthcare because of their low literacy. This article describes the extensive design process used to produce an illustrated form of the Veterans Health Administration Ambulatory Care Customer Satisfaction Survey.

METHODS

A multidisciplinary team (composed of experts in psychometrics, health services research, medicine, and literacy) and several trained patient interviewers used a combination of qualitative and quantitative methods to develop an illustrated version of the Veterans Health Administration's Ambulatory Care Customer Satisfaction Survey. The Department of Veterans Affairs (VA) Performance Analysis Center for Excellence conducts this survey using an instrument developed by the Picker Institute in Boston.28,29 The questionnaire has 62 items and is designed for written self-administration. It assesses patient satisfaction with recent ambulatory-care encounters (visits in the last 2 months and specifically the most recent visit) along 9 dimensions: access, continuity of care, courtesy, emotional support, patient education, patient preferences, pharmacy, specialist care, and visit coordination of care. There also are scores for overall satisfaction and overall coordination of care. The items require different types of response formats, such as rating scales (eg, poor, fair, good, very good, excellent), agreement (eg, yes, completely; yes, somewhat; no), and reporting experience (eg, same day, 1 to 14 days, 15 to 30 days, 61 to 120 days, and more than 120 days). Because measurement properties are sample dependent, there is no single set of summary performance measures. However, in a sample of veterans, the internal consistency reliabilities among the subscales ranged from .59 to .85 (JAS, unpublished data, 2002). We added 1 item to address adequacy of parking, as focus group participants had identified parking as particularly troublesome. The reading level of the VA instrument has not been formally assessed, though item responses are reviewed each year, and items are edited if nonresponse patterns indicate a problem with the item. We found that the questions on the VA instrument had a Flesch Reading Ease score of 76.6 (indicating "easy" or grade-school level) and a Flesch-Kincaid Grade Level score of 6 when analyzed by a computerized program.
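Both readability statistics follow standard published formulas: Reading Ease = 206.835 - 1.015 (words per sentence) - 84.6 (syllables per word), and Grade Level = 0.39 (words per sentence) + 11.8 (syllables per word) - 15.59. The Python sketch below shows how such scores can be computed; it is illustrative only, not the computerized program used in the study, and its vowel-group syllable counter is a rough approximation of what dictionary-based tools do.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, subtract a silent final 'e'.
    # Real readability software uses pronunciation dictionaries.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

# Example: score one survey-style question
ease, grade = flesch_scores(
    "Did you know whom to ask when you had questions about your health care?"
)
print(f"Reading ease: {ease:.1f}; grade level: {grade:.1f}")
```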

The team began by auditioning graphic artists to illustrate the questions. Two artists were chosen to illustrate a sample of questions. The team worked with the artists to brainstorm ideas for depicting the concepts in the survey. Ideas were shared between the 2 artists, who used different styles and often developed different visual concepts to illustrate items.

To begin pretesting, the team conducted 4 focus groups, 2 at the VA Medical Center in Philadelphia and 2 at a nearby academic medical center hospital. A total of approximately 200 patients scheduled for visits on designated days were sent letters inviting them to participate in a focus group during the lunch hour, either before or after their clinic visit. About a week after the letters went out, a research assistant called patients to ask them if they would like to participate. Once the target number per group was reached, no more patients were called. Twelve patients were scheduled per group, and a total of 31 participated. Participants ranged in age from 21 to 76 years (mean age, 57 years). Education levels were high school diploma or less (39%), some college (39%), and college degree or more (19%). Fifty-two percent of the participants were African American, and 32% were women. At the VA sites, 20 of 21 were men. At the academic medical center, 9 of 10 were women. Literacy level was not assessed in this phase of the study.

After the moderator explained the nature of the survey and the task of creating an instrument that was easier to read, each group was shown illustrations without and then with the written questions. The moderator asked participants a series of semistructured questions about how they would interpret the drawings and which style of drawings they preferred.

After these focus groups, the team conducted the remainder of the pilot testing through one-on-one interviews with patients drawn from the same 4 clinic sites. These interviews were preferable to focus groups because they were more appropriate for the task (assessing individual understanding) and more efficient (many more patients could be interviewed). Patients were recruited in an area that served as a waiting room for both the primary care clinics and the outpatient pharmacy. Interviewers approached waiting patients, introduced the study, and collected demographic information (sex, age, education, and race). Approximately 85% of the patients approached agreed to participate in the study. There were no differences in demographics between those who agreed to participate and those who did not. Interviews were conducted in a corner of the waiting area where clinic personnel provided a table and chairs, slightly removed from the main waiting area but still allowing the patient to hear his/her name if called for the appointment. Patients were eligible if they were at least 18 years old and were current patients at the outpatient clinics. All aspects of the study (focus groups and interviews) were approved by the institutional review boards at the Philadelphia Veterans Affairs Medical Center and the University of Pennsylvania. Oral consent was obtained. Both institutional review boards waived written consent.

In the interview, patients were asked to review a set of 4-6 pictures. Interviewers worked in pairs: one conducted the interview, while the other took notes. The interview started with a sample picture (held constant across all interviews). The interviewer showed the patient a picture without any text and asked, "What do you think is going on in this picture?" Then the interviewer showed the patient the picture with the text and asked, "What question do you think we are trying to ask here?" Lastly, the interviewer asked, "How would you answer this question?" The interviewer rated the patients on their understanding of the picture (without text) and their understanding of the question (pictures and text) using a 3-point scale (1 = yes, 2 = partially, 3 = no). After each interview, the interviewer and note taker reviewed the standardized rating form that documented the patient's responses and the note taker's preliminary judgments, and came to consensus on the ratings for the patient's understanding of the picture and the question. The team reviewed free-text comments about aspects of the pictures that patients did not understand, and used this feedback to revise many of the pictures.
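As an illustration of how the consensus ratings could be recorded and tallied, the short Python sketch below models one row of a rating form and a summary count of the kind reported in the Results; the field names and tallying helper are our own assumptions for exposition, not the study's actual form or software.

```python
from collections import Counter
from dataclasses import dataclass

# The interview protocol's 3-point comprehension scale
SCALE = {1: "understood", 2: "partially understood", 3: "not understood"}

@dataclass
class ItemRating:
    item_id: int        # which of the 63 illustrated items
    picture_only: int   # understanding of the picture without text (1-3)
    with_text: int      # understanding of the question, picture plus text (1-3)
    notes: str = ""     # free-text comments that guided revisions

def summarize(ratings: list[ItemRating]) -> Counter:
    """Tally consensus with-text ratings across items."""
    return Counter(SCALE[r.with_text] for r in ratings)

# Hypothetical consensus ratings for three items
ratings = [
    ItemRating(17, picture_only=2, with_text=1),
    ItemRating(27, picture_only=1, with_text=1),
    ItemRating(41, picture_only=3, with_text=2, notes="specialist unclear"),
]
print(summarize(ratings))
# Counter({'understood': 2, 'partially understood': 1})
```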

RESULTS

A total of 438 patients were recruited on-site and interviewed over a 1-year period. Their mean age was 51 years, and 72% were men. Approximately 51% of patients interviewed had a high school diploma or less. Participants identified themselves as African American (59%), white (23%), or Hispanic or other (18%).

Modifications to illustrations were made iteratively, based primarily on patients' feedback. However, the study team itself became more active in critiquing illustrations as it gained expertise about potential patient comprehension problems. All 63 pictures were revised, many of them multiple times. At least 15 people viewed and commented on each of the 63 illustrated items. The revised pictures also were reviewed by at least 15 people. The final illustrated booklet contained 22 pictures that had a rating of "understood," 39 pictures that had a rating of "partially understood," and 2 pictures that had a rating of "not understood." The Table summarizes the number of times items were revised. For most illustrations, the final revision was to make text fonts and details such as collars on shirts consistent with all other illustrations. As shown, 2 items could not be illustrated effectively; however, these illustrations were retained to keep the booklet consistently illustrated and to remain true to the original questionnaire.

Lessons Learned

The multiple rounds of pretesting of the items generated valuable information on the comprehension and acceptability of the draft illustrations. We report below on some of the lessons learned.

Artistic style matters: people prefer realistic illustrations and familiar circumstances.

The two artists had different artistic and communicative styles. Figures 1A and 1B show their initial attempts to illustrate question 17: "Did you have trouble understanding the provider because of a language problem?" Pretesting revealed that patients preferred the artistic style of Figure 1B, because it was more realistic and less abstract. However, more patients understood the question in Figure 1A; many misinterpreted the various symbols in the bubble in Figure 1B as curses. Figure 1C shows the final version, which blends successful elements of the previous drawings, such as saying hello, using the light bulb as a symbol of understanding, and using the question mark as a symbol of confusion. In general, the drawings worked better when they portrayed everyday situations. It might be noted, however, that preference for a style was not necessarily related to the comprehensibility of the message conveyed with that style, just as someone might find typewritten text very readable but calligraphy more attractive.

Using multiple illustrations for each question-and-answer option creates visual clutter and obscures meaning.

The team initially faced decisions about what to illustrate in the questionnaire: the questions (stems), the answers (response options), or both. The strategy selected was to pick the main concepts in the stem and use them to support answer options. For example, Figure 2A shows multiple concepts from the stem for question 47: "Did you know whom to ask when you had questions about your healthcare?" The concepts chosen for illustration were "questions," "healthcare (eg, medications, immunizations, electrocardiograms)," and "whom to ask." In addition, we tried to portray the process of having a question (top frames) and knowing whom to ask (bottom frames). Figure 2B shows the final version. Two concepts were retained ("questions" and "whom to ask") in a greatly simplified format. In the end, most illustrations focused on just 1 concept.

The group also needed to decide whether to illustrate all answer options, especially when the answers were scaled. Figure 3A shows initial drawings illustrating the 3 options for question 33: "How well organized was the clinic?" Pretesting revealed that illustrating just the end points (rather than every option) led to better patient understanding and also reduced patient confusion by eliminating visual clutter. We also experimented with a single illustration conveying the general theme, rather than 2 illustrations conveying the scale end points. Figure 3B shows the final drawing illustrating the end points, with an arrow connoting the scale.

Simple illustrations work best.

The group initially tried to eliminate any reliance on reading ability by illustrating as many aspects of the questions and answers as possible, and not using any words. However, this approach resulted in too many pictures and too much detail for the patients to follow. As the pictures progressed, the artists removed extraneous details and simplified their drawings. For example, Figure 4A shows the initial drawings for question 27: "Did you spend as much time with your provider as you wanted?" Patients did not understand the use of split frames to communicate the beginning and end of a visit; further, the details of the provider's office (eg, door, diploma, clock on the wall, examination table) obscured the message. Figure 4B shows the drawings after multiple revisions. The final version features simple, uncluttered cues for "provider" (a stethoscope), "time" (looking at a watch), and meeting expectations (same simple text in thought bubble, with patient and provider smiling). In addition to simplicity, we found that patients' interpretations relied heavily on the characters' facial expressions. They quickly keyed in on the expressions and made appropriate interpretations of the "good" and "bad" situations.

A few words go a long way.

Initially, the team used no text in the illustrations, in an attempt to develop a questionnaire that could be understood by patients with practically no reading ability. However, we soon found that the use of simple words and repeating key words from questions actually supported understanding. The final illustrations shown in Figures 1C and 4B use text successfully.

Mathematical symbols and graphics with numbers may not work.

Many of the initial drawings sought to communicate aspects of time through clocks and calendars, and used mathematical symbols for "same as" and "less than." Figure 4A shows initial attempts to use clocks to illustrate whether time expectations were met. Patient interviews revealed that clocks, as symbolic representations of time, did not work for our audience. The group experimented with the use of arrows and shading on clocks, but understanding was not improved. As Figure 4B shows, in the final version the clocks were eliminated, and words, gestures, and facial expressions were used to communicate satisfaction or dissatisfaction with the time spent.

In interpreting illustrations, patients bring their stereotypes with them.

The team made a conscious effort to use both women and men and people of different races and ethnicities in the roles of patients, staff, and providers. Early on we learned that some gender-based assumptions hindered understanding of the pictures. For example, patients tended to identify women as nurses, even with lab coats. We debated but rejected the idea of accepting this stereotype to improve understanding (by portraying all physicians as men). Instead, we chose other visual clues (eg, lengthening the coats of all physicians, adding stethoscopes for all providers) to distinguish providers from clinic staff, and to delineate patients and providers. Unlike gender, race- and ethnicity-based assumptions did not affect understanding of the illustrations in this ethnically diverse patient population. Patients rarely mentioned race when commenting on the illustrations, and said it was not an issue when asked directly about racial portrayals.

Consistent visual cues help get the point across.

Using the same picture or symbol in several illustrations helps give patients a sense of the meaning of a picture, even if they did not understand its use initially. Examples of standard visual cues include a light bulb (for understanding), question marks (for confusion), scribbles (for incomprehensible language), test tubes (for tests), long lab coats (for physicians), and a stethoscope (for a healthcare provider). Once pilot testing provided evidence that these standard cues were understandable, the artists revised illustrations so that these cues were used consistently.

Some concepts remain difficult to translate well into pictures and graphics.

Despite applying all of the lessons learned regarding simplification and judicious use of words, some concepts remained difficult to convey. It became apparent through pretesting that "specialist" could not be successfully communicated without some words. Figure 5A shows initial illustrations of question 41: "Overall, how would you rate the quality of your most recent specialist visit?" Using the symbol of a heart and a caduceus did not allow patients to identify the provider as a specialist. Figure 5B shows the final version of the illustration, in which the specialist is labeled.

One of the most challenging concepts to portray was the answer option "not applicable," which was available for 8 questions. Many attempts were made to illustrate that option for each question, or to develop a symbol for "not applicable" that could be used each time. Figure 5A shows an attempt to illustrate this option by using the universal circle-and-slash symbol. Patients did not understand this symbol in this context, nor did they understand other illustrations for the option. In the end, as indicated in Figure 5B, the option was "illustrated" by repeating the words in a white box with a larger font.

DISCUSSION

Increasingly, patient satisfaction is viewed as a criterion by which the quality of healthcare services can be measured. However, most evaluations of patient satisfaction rely on self-administered written questionnaires, which may lie beyond the patient's ability to complete. Because patients with low health literacy report poorer health status and less use of preventive services3 and may face greater barriers to accessing and navigating the healthcare system, it is especially important to develop instruments that can reach this population.

The value of illustrations, especially in questionnaires, remains unknown. (The instrument currently being evaluated is a 24-page booklet stapled in the middle, with a finished size of 8½ by 11 inches, a large [14-point] serif font, 2 questions per page, and generous use of white space, printed in black and white to make it easy to reproduce.) A recent study found that an illustrated version of a 10-item adult Dermatology Life Quality Index was superior to a text-only version in terms of patient preference and ease of use, but equivalence between the versions could not be demonstrated.30 Other investigators have developed illustrated instruments for young children, most notably to assess psychiatric symptoms, but they are administered by the interviewer.31,32 For a target audience of adults, however, others have noted the importance of a rigorous design process for the success of pictures as a communication aid for patients and families.33,34 To produce clear, culturally acceptable pictures, it is particularly important to collaborate with the target population.35

Many of these findings confirm the advice of literacy experts in terms of using simple drawings, familiar images, and realistic illustrations, rather than cartoon-like or stick-figure sketches.36 However, this patient population did not voice strong preferences for illustrations of people of their own race or sex. In general, patients appreciated racial and ethnic diversity in the pictures, but did not require a specific racial composition across the pictures.

Some questions remained difficult to illustrate, even after applying all lessons that were learned along the way. Our inability to produce illustrations that were uniformly understood could be due to the complexity of the questions in their written form, or to a more fundamental difficulty in graphically representing abstract concepts. In either case, these issues should be considered when deciding whether to enhance an existing instrument with illustrations, or to develop a new illustrated instrument for low-literacy populations.

We translated an existing instrument into pictures and made the decision to stay true to the original text. We suspect our task would have been easier had the original instrument been simpler or, better still, had it been developed from the start with the idea of producing parallel text and illustrated versions. Under that circumstance, item development might have been based in part on potential ease of illustration.

Finally, the failure of clocks, calendars, and mathematical symbols in pretesting may reflect the strong correlation between prose literacy and numeracy (quantitative communication skills).1 Patients with low literacy (as well as some with high literacy) often have difficulty decoding quantitative symbols and representations such as charts and graphs. Our findings point out the importance of pretesting to ascertain the ability of the target audience to interpret these conventions that, at least on the surface, appear to communicate concepts without words. In some cases, a picture (or symbol) is worth far less than 1000 words.

From the Center for Health Equity Research and Promotion, Philadelphia Veterans Affairs Medical Center, Philadelphia, Pa (JW, AA, KR, KK, LM, JM, DAA, JAS); the Division of General Internal Medicine, University of Pennsylvania, Philadelphia (JW, AA, KR, JM, DAA, JAS); and the Leonard Davis Institute of Health Economics, University of Pennsylvania (JW, DAA, JAS).

This work was supported by grant PCC-98-0871-1 from the Health Research and Development Service, Department of Veterans Affairs.

Address correspondence to: Janet Weiner, MPH, Leonard Davis Institute of Health Economics, 3641 Locust Walk, Philadelphia, PA 19104-6218. E-mail: weinerja@mail.med.upenn.edu.

REFERENCES

1. Kirsch IS, Jungeblut A, Jenkins L, Kolstad A. Adult Literacy in America: A First Look at the Findings of the National Adult Literacy Survey. US Department of Education, National Center for Educational Statistics. Princeton, NJ: Educational Testing Service; 1993.

2. Ratzan SC, Parker RM. Introduction. In: Selden CR, Zorn M, Ratzan SC, Parker RM, eds. National Library of Medicine Current Bibliographies in Medicine: Health Literacy. Bethesda, Md: National Institutes of Health; 2000.

3. Institute of Medicine. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academies Press; 2004.

4. Goldstein AO, Frasier P, Curtis P, Reid A, Kreher NE. Consent form readability in university-sponsored research. J Fam Pract. 1996;42:606-611.

5. Hopper KD, TenHave TR, Tully DA, Hall TEL. The readability of currently used surgical procedure consent forms in the United States. Surgery. 1998;123:496-503.

6. Duffy MM, Snyder K. Can ED patients read your patient education materials? J Emerg Nurs. 1999;25:294-297.

7. Bradley B, Singleton M, Li Wan Po A. Readability of patient information leaflets on over-the-counter medications. J Clin Pharm Ther. 1994;19:7-15.

8. Larson I, Schumacher HR. Comparison of literacy level of patients in a VA arthritis center with the reading level required by educational materials. Arthritis Care Res. 1992;5:13-16.

9. Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry. 1988;25:25.

10. Al-Tayyib AA, Rogers SM, Gribble JN, Villarroel M, Turner CF. Effect of low medical literacy on health survey measurements. Am J Public Health. 2002;92:1478-1481.

11. Meade CD, Byrd JC, Lee M. Improving patient comprehension of literature on smoking. Am J Public Health. 1989;79:1411-1412.

12. Young DR, Hooker DT, Freeberg PE. Informed consent documents: increasing comprehension by reducing reading level. IRB. May-June 1990;12:1-5.

13. Rudd RE, Moeykens BA, Colton TC. Health and literacy: a review of medical and public health literature. In: Comings J, Garner B, Smith C, eds. Annual Review of Adult Learning and Literacy. Vol 1. San Francisco, Calif: Jossey-Bass; 2000.

14. Plimpton S, Root J. Materials and strategies that work in low literacy health communication. Public Health Rep. 1994;109:86-92.

15. Root J, Stableford S. Easy-to-read consumer communications: a missing link in Medicaid managed care. J Health Politics Policy Law. 1999;24:1-26.

16. Doak CC, Doak L, Root J. Teaching Patients with Low Literacy Skills. 2nd ed. Philadelphia, Pa: JB Lippincott; 1996.

17. National Cancer Institute. Clear and Simple: Developing Effective Print Materials for Low-Literate Readers. 1995. Updated February 27, 2003. NIH publication 95-3594. Available at: http://cancer.gov/cancerinformation/clearandsimple. Accessed September 10, 2004.

18. Paskett ED, Tatum C, Wilson A, Dignan M, Velez R. Use of photoessay to teach low-income African American women about mammography. J Cancer Educ. 1996;11:216-220.

19. Rudd RE, Comings J. Learner developed materials: an empowering product. Health Educ Q. 1994;21:33-47.

20. Michielutte R, Bahnson J, Dignan MB, Schroeder EM. The use of illustrations and narrative text style to improve readability of a health education brochure. J Cancer Educ. 1992;7:251-260.

21. Delp C, Jones J. Communicating information to patients: the use of cartoon illustrations to improve comprehension of instructions. Acad Emerg Med. 1996;3:264-270.

22. Austin PE, Matlack R, Dunn KA, et al. Discharge instructions: do illustrations help our patients understand them? Ann Emerg Med. 1995;25:317-320.

23. Davis TC, Frederickson DD, Arnold C, Murphy PW, Herbst M, Bocchini JA. A polio immunization pamphlet with increased appeal and simplified language does not improve comprehension to an acceptable level. Patient Educ Couns. 1998;33:25-37.

24. Powell EC, Tanz RR, Uyeda A, Gaffney MB, Sheehan KM. Injury prevention education using pictorial information. Pediatrics. 2000;105:e16.

25. Filippatou D, Pumfrey PD. Pictures, titles, reading accuracy and reading comprehension: a research review (1973-1995). Educ Res. 1996;38:259-291.

26. Committee on the Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.

27. Agency for Healthcare Research and Quality. National Healthcare Quality Report: Summary. December 2003. AHRQ publication 04-RG003. Available at: http://www.ahrq.gov/qual/nhqr03/nhqrsum03.htm. Accessed September 10, 2004.

28. Halpern J. The measurement of quality of care in the Veterans Health Administration. Med Care. 1996;34:MS55-MS68.

29. Ambulatory care. In: Performance on Customer Service Standards. 1999 National Survey Report. Morrisville, NC: National Performance Data Feedback Center; 2000.

30. Loo WJ, Diba V, Chawla M, Finlay AY. Dermatology Life Quality Index: influence of an illustrated version. Br J Dermatol. 2003;148:279-284.

31. Valla JP, Bergeron L, Smolla N. The Dominic-R: a pictorial interview for 6- to 11-year-old children. J Am Acad Child Adolesc Psychiatry. 2000;39:85-93.

32. Ernst M, Cookus BA, Moravec BC. Pictorial instrument for children and adolescents (PICA-III-R). J Am Acad Child Adolesc Psychiatry. 2000;39:94-99.

33. Dowse R, Ehlers MS. Pictograms in pharmacy. Int J Pharm Pract. 1998;6:109-118.

34. Tymchuk AJ, Lang CM, Sewards SE, Lieberman S, Loo S. Development and validation of the illustrated version of the Home Inventory for Dangers and Safety Precautions. J Family Violence. 2003;18:241-252.

35. Dowse R, Ehlers MS. The evaluation of pharmaceutical pictograms in a low-literate South African population. Patient Educ Couns. 2001;45:87-99.

36. Younger E, Wittet S, Hooks C, Lasher H. Immunization and Child Health Materials Development Guide. Seattle, Wash: PATH; 2001.
