Objective: To determine whether extractable blood pressure (BP) information available in a computerized patient record system (CPRS) could be used to assess quality of hypertension care independently of clinicians' notes.
Study Design: Retrospective cohort study of a random sample of hypertensive patients from 10 Department of Veterans Affairs (VA) sites across the country.
Methods: We abstracted BPs from electronic clinicians' notes for all medical visits of 981 hypertensive patients in 1999. We compared these with BP measurements available in a separate vital signs file in the CPRS. We also evaluated whether assessments of performance varied by source by using patients' last documented BP reading.
Results: When the vital signs file and notes were combined, a BP measurement was taken for 71% of 6097 medical visits; 60% had a BP measurement only in the vital signs file. Combining sources, 43% of patients had a BP reading of less than 140/90 mm Hg; by site this varied (34%-51%). Vital signs file data alone yielded similar findings; site rankings by rates of BP control changed minimally.
Conclusions: Current performance review programs collect clinical data from both clinicians' notes and automated sources as available. However, we found that notes contribute little information with respect to BP values beyond automated data alone. The VA's vital signs file is a prototypical automated data system that could make assessment of hypertension care more efficient in many settings.
(Am J Manag Care. 2004;10:473-479)
Obtaining valid data describing processes and outcomes of care is central to quality assessment and improvement. Traditionally, such data could be obtained from a variety of sources including administrative databases, medical records, and patient surveys. Administrative databases contain information typically collected for billing purposes or to track utilization, including demographics, diagnoses, and procedure codes. Such databases allow cost-efficient study of large numbers of cases but lack the clinically detailed information available from medical records.1-3 Increasingly though, clinically detailed information such as laboratory and vital signs data are becoming incorporated into comprehensive information systems.3,4 The completeness and accuracy of these data systems are often in question.5,6 Consequently, assessment of their validity remains necessary.
Hypertension is an important condition whose treatment is in need of quality improvement.7 It affects more than 50 million Americans and more than 1 million veterans.8,9 Despite readily available, effective therapy for lowering blood pressure (BP) and preventing cardiovascular morbidity and mortality,8,10-12 most patients with hypertension have inadequate BP control.13-18 In the 1999-2000 National Health and Nutrition Examination Survey, 69% of patients with a diagnosis of hypertension had a BP reading greater than or equal to 140/90 mm Hg.8 Further, several studies have shown that despite reported familiarity and agreement with national hypertension guidelines, clinicians tolerate higher BPs than are recommended.13,19-21
Improving hypertension care requires ongoing assessment. Unadjusted BP control is the only widely used measure to assess hypertension care, used by both the Health Plan Employer Data and Information Set (HEDIS) and the Department of Veterans Affairs (VA) performance review program.22,23 Unlike many other performance indicators that involve first examining automated data and then reviewing the medical chart if data are not available, hypertension assessment has traditionally relied solely on chart review.24 This is at least in part because BP data may be recorded by several individuals. In most ambulatory clinics, a nurse takes an initial BP and documents this reading in an intake note. Clinicians may then do additional BP measurements, which are documented in their medical notes. For those settings with computerized patient record systems (CPRS), the initial vital signs information recorded by the nurse also may be entered into a separate data field of the record, which may then be readily extracted. In previous work, we found that only 1 BP reading was taken at most visits, which usually was present in the nurse's intake note.14 In a setting where this information is entered directly into the computerized record, it is unknown how much information would be lost by examining only these automated vital signs data, and whether using only these data would impact quality measurement.
The current study compares the availability and agreement of BP measurements from an extractable data field of the CPRS with BP measurements obtained from clinicians' notes. We address the following 3 questions:
Lessons learned from our experience may be useful to other researchers and individuals interested in measuring healthcare quality.
This is a retrospective cohort study that analyzed VA databases. The VA, as the largest integrated healthcare system in the United States, provides care to more than 4 million veterans and is considered to be a leader in establishing "a multifunctional integrated electronic medical record system."25-27
Study Subjects and Sites
We identified individuals with hypertension who were receiving regular outpatient medical care at 10 VA sites across the country during 1999. (A site comprises a hospital-based outpatient clinic and associated community-based outpatient clinics.) Selected sites had been entering BP measurements into a separate vital signs file of the CPRS and using electronic clinicians' notes for medical clinics, both as of at least January 1, 1999.
We used a national administrative VA database, the OutPatient Clinic file, to identify eligible subjects. To be eligible, patients needed to have at least 1 OutPatient Clinic-listed hypertension diagnosis (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] code 401, 402, or 405) in 1998 and to be regular VA users (ie, ≥2 OutPatient Clinic-listed medical clinic visits at least 6 months apart in 1999). The study sample was randomly selected from among all eligible patients stratified by site. We sought 100 patients per site and achieved a final sample size of 981.
Data Collection and Sources
We used the VA's CPRS, known as the Veterans Health Information Systems and Technology Architecture, and the OutPatient Clinic file. The Veterans Health Information Systems and Technology Architecture, which is maintained at the hospital within a site, contains multiple files, including those with clinical data such as vital signs, laboratory and radiologic test results, pharmacy data, problem lists, and provider notes. It also contains an administrative-encounter file with diagnoses and procedures from all clinic visits that is transferred to a central VA data repository in Austin, Texas, and incorporated into the OutPatient Clinic file.28 At the clinic level, BP measurements usually are taken by a nurse and either directly entered into a separate vital signs file with structured data entry fields in the CPRS, or reported on encounter forms and then entered by a clerk into this file. Additional BP data may be available through provider notes, which are either dictated and transcribed or typed directly into the provider notes file of the CPRS. (These clinical files are not yet routinely transferred centrally.)
Study data were collected during the 12 months in 1999. For automated data, patient demographics, ICD-9-CM-coded diagnoses, and medical clinic visit dates were obtained from the OutPatient Clinic file; BP measurements were extracted from the CPRS vital signs file. Vital signs file data were merged with OutPatient Clinic visit information such that a visit was assigned to each BP recording. For dates with multiple clinic visits, such as visits to primary care and a general surgical clinic, we assigned all BP recordings to the medical clinic.
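The merge step above (attach each BP recording to a same-day visit, preferring the medical clinic when a patient had multiple visits that day) can be sketched as follows. This is a minimal illustration with hypothetical record layouts and field names; the actual VistA and OutPatient Clinic file extracts are structured differently.

```python
from collections import defaultdict

# Hypothetical flat records, not the real file schemas.
vitals = [  # (patient_id, date, systolic, diastolic)
    ("p1", "1999-03-02", 152, 94),
    ("p1", "1999-03-02", 148, 90),
]
visits = [  # (patient_id, date, clinic_type)
    ("p1", "1999-03-02", "medical"),
    ("p1", "1999-03-02", "surgical"),
]

def assign_bps_to_visits(vitals, visits):
    """Attach each BP recording to a visit on the same date.

    When a patient has multiple same-day visits, all BP recordings
    are assigned to the medical clinic visit, mirroring the rule
    described in the text.
    """
    visit_bps = defaultdict(list)
    for pid, date, sbp, dbp in vitals:
        same_day = [v for v in visits if v[0] == pid and v[1] == date]
        if not same_day:
            continue  # no visit on record for this recording
        medical = [v for v in same_day if v[2] == "medical"]
        target = medical[0] if medical else same_day[0]
        visit_bps[target].append((sbp, dbp))
    return dict(visit_bps)

# Both readings end up on the medical clinic visit, not the surgical one.
print(assign_bps_to_visits(vitals, visits))
```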
Clinicians' notes from all medical visits of selected patients were obtained by accessing each site's local intranet and printing a hard copy. (As mentioned, the file containing these notes also is part of the CPRS, but the information would not be considered automated because it is free text.) An experienced nurse-abstractor then extracted note information including visit type, date, and BP. Blood pressure information from nurses' intake notes or clinicians' note entries that used an object template taken from the vital signs file was ignored because we were interested in whether the clinician took additional BP readings.
A 5% random sample of charts (a chart comprises all clinical notes on a given patient) was reviewed by one of the authors (A.M.B.) for interrater reliability. Observed agreement on the presence and value of all readings was 96%. The only discrepancies found related to the presence of a BP reading rather than to its value. Such discrepancies were more likely to occur for patients who had more than 6 visits and more than 5 BP readings available, with one or the other reviewer missing an available BP.
Completeness of Automated Blood Pressure Information
First, we determined whether BP measurements were available in the CPRS vital signs file. Our denominator consisted of all OutPatient Clinic-identified medical clinic visits in 1999. We examined the percentage of visits with at least 1 BP measurement, as well as the percentage with 2 or more measurements. If multiple identical BP values were found in the vital signs file for the same day, we deleted duplicates.
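The same-day deduplication rule described above amounts to keeping only the first occurrence of each identical (date, systolic, diastolic) value. A minimal sketch, again with an illustrative record layout rather than the actual vital signs file schema:

```python
def dedupe_same_day(readings):
    """Drop exact-duplicate BP values recorded for the same day.

    `readings` is a list of (date, systolic, diastolic) tuples for one
    patient; first occurrences are kept in order. Distinct values on
    the same day are retained.
    """
    seen = set()
    kept = []
    for r in readings:
        if r not in seen:
            seen.add(r)
            kept.append(r)
    return kept

readings = [
    ("1999-03-02", 152, 94),
    ("1999-03-02", 152, 94),  # exact duplicate, removed
    ("1999-03-02", 148, 90),  # different value, kept
]
print(dedupe_same_day(readings))
```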
Next, we examined how much additional BP information would be obtained by combining vital signs file data and information available in clinicians' notes. We used the same denominator and eliminated duplicate values in the notes. We examined the amount of information lost by calculating the differences between sources overall and by site.
Discrepancies Between Sources
For visits with BP measurements available in both sources, we compared the number and value of BP recordings in the clinicians' notes with those in the vital signs file. We cross-tabulated visits by number of automated BP measurements against the number of BP measurements from the corresponding visit notes. Additionally, we checked whether each BP recording in the vital signs file had an exact match in the clinicians' notes, and noted the frequency of visits at which this match occurred. We checked for a match using both individual BPs from a given source and the average of available BPs.
We also examined whether BP documentation differences between sources (both in terms of BP presence and average BP value at a given visit) were because the BP was high and therefore the clinician was more likely to repeat the measurement and report it in his or her note. We used the average BP in the vital signs file at a given visit and determined whether it was high (≥140/90 mm Hg). We tested whether visits with vital signs file BPs versus those with BPs in both sources were more or less likely to have a high BP reading by using the chi-square test. We similarly compared the BPs of matching and nonmatching visits.
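The chi-square comparison above is a standard 2x2 test of proportions. As a sketch, the Pearson statistic for a 2x2 table can be computed directly; the counts below are invented for illustration and are not the study's data.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], eg rows = source group, columns =
    high BP yes/no.
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Illustrative counts: rows = vital-signs-file-only visits vs
# both-source visits; columns = high BP (>=140/90), not high.
stat = chi2_2x2(132, 100, 86, 50)
# Compare against 3.84, the .05 critical value for 1 df; this made-up
# table falls below it.
print(f"chi2 = {stat:.2f}")
```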
Variation in Judgment of Blood Pressure Control
We first examined differences in BP control (BP <140/90 mm Hg) at the individual visit level. We then determined patient-level control by calculating the average BP for each patient at his or her last visit of the year for which a BP value was available. We calculated the percentage of patients with a BP less than 140/90 mm Hg, examining results by source for the whole sample and testing for site differences by using the chi-square test. We again computed differences between combined sources and the vital signs file alone.
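The patient-level measure above (average BP at the last visit of the year with a BP value, judged against 140/90 mm Hg) can be sketched as below. The data structure and values are hypothetical, chosen only to illustrate the calculation.

```python
def percent_controlled(patient_visits):
    """Share of patients whose average BP at their last visit with a
    BP recording was below 140/90 mm Hg.

    `patient_visits` maps patient id -> list of (date, [(sbp, dbp), ...]);
    visits without readings are skipped. ISO date strings sort correctly
    as text.
    """
    controlled = 0
    assessed = 0
    for pid, visits in patient_visits.items():
        with_bp = [(day, bps) for day, bps in visits if bps]
        if not with_bp:
            continue  # patient has no BP reading all year
        _, bps = max(with_bp, key=lambda v: v[0])  # last visit with a BP
        avg_sbp = sum(s for s, _ in bps) / len(bps)
        avg_dbp = sum(d for _, d in bps) / len(bps)
        assessed += 1
        if avg_sbp < 140 and avg_dbp < 90:
            controlled += 1
    return 100 * controlled / assessed

data = {
    # p1: last visit averages 138/88 -> controlled
    "p1": [("1999-02-01", [(150, 95)]), ("1999-11-03", [(136, 84), (140, 92)])],
    # p2: last visit with a BP is 150/85 -> uncontrolled (systolic)
    "p2": [("1999-05-01", []), ("1999-06-10", [(150, 85)])],
}
print(percent_controlled(data))  # → 50.0
```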
Baseline sample characteristics are presented in Table 1. The number of patients per site varied from 71 to 103 because of differential adoption of electronic notes by site. There were 6097 visits to primary care, medical subspecialty clinics, and urgent and emergent care. Of these, 3987 were primary care visits; 629 were subspecialty primary care visits (general internal medicine, geriatrics, women's clinic, hypertension, cardiology, spinal cord clinic); and 1481 were subspecialty, urgent care, or nursing visits.
How Complete Are the Data?
Sixty percent of all medical visits had at least 1 BP measurement in the vital signs file (Table 2). Combining automated (vital signs file) and clinician note information, 71% of visits had at least 1 recorded BP measurement (Table 2). Therefore, 11% of visits had a BP measurement in the clinicians' notes that was not in the vital signs file. By site, the amount of available information lost using only the vital signs file varied from 1% to 35% (P < .0001; data not shown). Only 2% of visits had 2 or more BP measurements in the vital signs file, whereas 15% of visits had at least 2 BP measurements recorded by combined sources. Thus, 13% of visits had a second BP measurement in the clinicians' notes that was not in the vital signs file.
What Factors Are Associated With Discrepancies Between Sources?
A BP measurement was available in both the vital signs file and the clinicians' notes for 1361 visits. Seventy-nine percent (1070/1361) of these visits had only 1 BP measurement in each source. The BP measurement matched exactly for 50% (678/1361) of these visits. Of these matching visits, 99% (674/678) had only 1 BP measurement in each source.
Conversely, 50% (683/1361) of the BP measurements taken during visits did not match exactly. Fifty-eight percent (396/683) of these visits had only 1 BP measurement in each source. Of the 287 visits with more than 1 BP measurement available in either source, 188 visits had 1 BP value that matched and 237 visits had multiple BP measurements only in the clinicians' notes; 30 visits had multiple BPs only in the vital signs file.
Next we examined whether BP documentation differences between sources were related to BP level. Of the 4350 visits with a BP measurement in either source, 2316 had a BP measurement only in the vital signs file. Of these 2316 visits, 57% had a high BP reading (≥140/90 mm Hg), compared with 63% of visits with a BP measurement in both sources (P = .02). Thus, a clinician-noted BP value was more likely if the intake or vital signs file BP was high. Of the 1361 visits with a BP measurement in both sources, 78% of visits where the average BP values did not match had a measurement indicating uncontrolled BP in the vital signs file only, compared with 48% of visits where the BP measurements did match (P < .0001). This suggests the clinician was more likely to repeat the BP measurement when the intake value was high, resulting in nonmatching values, rather than just transcribe the intake BP (matching values).
How Do Judgments of Blood Pressure Control Vary Based on the Data Source?
At the visit level, only 61 visits would have been misclassified depending on the source. At 48 visits, the BP would have been classified as uncontrolled according to the vital signs file, but would have been considered controlled when the combined source was used. At 13 visits, the BP would have been classified as controlled according to the vital signs file, but would have been considered uncontrolled when the combined source was used.
At the patient level, using only automated data, the BP of 41% of the patients was controlled (see Table 3). Using both sources yielded similar results in terms of overall control and site rankings by percent control. Overall, 43% of patients had controlled BP, and the most a site changed ranking was by 2 places (Table 3). Thus, the extra information provided in the notes changed the assessment of BP control for a given patient in fewer than 2% of cases. In those few cases, the BP changed from uncontrolled to controlled.
If one assumed patients with missing BP measurements had uncontrolled BP (≥140/90 mm Hg) at their last visit, this assumption made minimal difference to overall results or site rankings, even for the site missing the most data (10/101 patients), when just automated information was used. The Health Plan Employer Data and Information Set and the VA's performance review program use the lowest available BP and assume the BP is uncontrolled if missing.22,23 Analyzing by both these criteria made little difference to results (data not shown).
Current assessments of BP control rely largely on chart review and are therefore time-consuming and limited in scope. If valid BP data were available in automated form, this would make evaluations of BP control and quality of hypertension care more useful by encompassing more cases and allowing more timely feedback of information to providers, so that corrective actions would be more likely.29
In the present study, we found that most BP data were available in an automated form in the vital signs file of the VA's CPRS and that most medical visits had only 1 BP measurement available regardless of source. Of the 22% of visits with BP values available in both the automated data and the clinician's notes, half the time the BP in the clinician's note was a duplicate of the vital signs file BP, suggesting that the clinician was simply taking this information from the vital signs file or the nurse's note and incorporating it into his or her note. As expected, clinicians were more likely to repeat the BP measurement when the initial readings by nurses were high, but this situation did not occur very often. Most repeat measurements were not appreciably different, and their inclusion did not significantly affect judgments of control. Despite additional BP values available in the notes at 11% of visits, the percentage of patients with controlled BP did not change appreciably when comparing automated data with automated data plus notes.
No other studies have attempted to validate automated BP readings or other vital signs data in this way. One other study by Goldstein et al examined recorded BP values and assessments of control, although its methods were somewhat different.30 These researchers studied chart BPs, comparing the BP in the initial note by the nurse (which is comparable to the CPRS vital signs file) with BP measurements done by clinicians for 350 patients participating in a hypertension intervention study, at 2 separate primary care visits at the Palo Alto, California, VA, a site also used in our study. The BP was rechecked at 48% of visits where patients had uncontrolled BP and 38% of all visits. For approximately 25% of visits, patients who had an initial uncontrolled BP reading had a controlled BP at clinician recheck. Given their findings, Goldstein et al argue for including repeat BP measurements in quality assessments. However, our data show that, despite this site having the highest percentage (26%) of visits with 2 or more available BP measurements in the combined source, BP values at only 18 of 656 (2.7%) visits changed from uncontrolled to controlled when clinician information was considered (data not shown). The study by Goldstein et al was presented as a meeting abstract, so full details were not available. However, it is likely that dissimilar methods account for the discrepant results. Further, our methods better reflect those of current performance review programs.
Most of the available studies of automated data elements have used claims data to examine the validity of diagnoses or process measures.24 Few have examined the use of automated clinical data for process or outcome measures, and only 1 other study has looked at BP. Kerr et al compared automated data from a central VA diabetes registry with medical record data (both electronic and paper) with respect to diabetes quality measures including the measurement and level of control of BP, low-density lipoprotein (LDL) cholesterol, and glycosylated hemoglobin (HbA1c).27 They also investigated whether combining information from both sources (compared with using either source alone) affected quality assessments for approximately 800 veterans receiving diabetes care in 1999. They found lower rates for all process measures using automated data, compared with either the medical record or with both sources combined. Unlike our study, they found fewer BP measurements available in the automated data than the chart. For the process measure of the proportion of patients with a BP measurement in 1999, the respective proportions by source were 84%, 99%, and 99% for automated data, medical record, and combined sources. If we were to construct a similar measure, 98% of our sample's patients would have a BP measurement according to the automated data, whereas only 33% would have a BP measurement based on clinician note data alone; when the sources are combined, 100% would have a BP measurement.
These differing results may have occurred because Kerr et al collected the BP data at different time periods within sources and perhaps because they used a different, less complete, automated source. However, like our study, the Kerr et al study found that overall rates for outcome measures, including the percentage of patients with BP less than 140/90 mm Hg, LDL cholesterol less than 130 mg/dL, or HbA1c less than 9.5%, were comparable regardless of source, although they could not construct a combined BP control measure because BP data were measured at different time periods.
Thus, we are the first to examine an automated database with BP measurements and compare it with medical notes from the same time period. We found it yielded as much or more information than medical note review and gave assessments of performance comparable to those of a combined measure using automated data and notes.
Our data are already a few years old. However, increased automation and familiarity with the VA's CPRS have occurred over this time period. Blood pressure information is now more likely to be entered into the database and is more likely to be entered directly by the nurse who took the measure, as opposed to a third party. Thus, newer vital signs file data should be even more accurate and complete. (In this study, 673 visits had a BP value only in the notes, which is unlikely to occur in the present VA ambulatory clinic setting.)
There is no way to know the true reliability and accuracy of data entered into the vital signs file or clinicians' notes because we have no control over the measurement, documentation, and data entry process. This is true for all information systems and medical records.
Site 6 had the fewest patients because of difficulty finding patients with available electronic notes. Also at this site, assessment of performance regarding BP control varied the most by source. It had the lowest percentage of BP values entered into the vital signs file and the second highest percentage with BP values only in the notes. This site was clearly behind the others in the adoption of the electronic record and also in the entry of BP measurements into the vital signs file. This likely has changed with time so that there will be less of a discrepancy between the 2 sources.
Because we could not analyze by individual clinician, we do not know whether such assessments would vary by source. However, most visits were associated with only 1 BP value in either source, with more visits having information available in the vital signs file than in the notes. With increased adoption of the vital signs file, we would expect that those visits where a BP value was available only in the notes would now have that information entered into the vital signs file. Therefore, clinician assessments should be consistent between sources.
All healthcare systems face the challenge of developing effective methods for assessing the quality of care. In the case of hypertension, such assessments require accurate BP information. Although this study only examined VA data systems and settings, it is likely that non-VA clinicians behave similarly with regard to BP measurement. We believe that by implementing or enhancing existing medical record systems with similar extractable data fields, other healthcare organizations also may find that they are able to make more efficient decisions about hypertension care. Moreover, such systems could incorporate other clinical data fields that could be likewise extractable, making clinically detailed information more readily obtainable and facilitating monitoring of various quality indicators across many medical conditions. These could include information such as whether a pneumococcal vaccination was given or a foot exam performed on a patient with diabetes.27
Current performance review programs collect clinical data from both clinicians' notes and automated sources as available.22 Given the demonstrated completeness of automated BP data in the electronic record, we believe assessments of hypertension care can be made based on these data alone, making such evaluations more efficient. Where effective databases do not currently exist, the VA's vital signs file is a prototypical clinical computerized data system that could be easily adopted by other settings.
We thank Elaine Czarnowski, RN, for data abstraction and Marshall Goff for editorial assistance.
From the Department of Health Services, Boston University School of Public Health, Boston, Mass (AMB, ATW, DRB); the Center for Health Quality, Outcomes and Economic Research, Bedford VAMC, Bedford, Mass (AMB, ATW, ECH, DRB); and the Section of General Internal Medicine, Boston Medical Center, and Boston University School of Medicine, Boston, Mass (ASA, DRB).
This project was funded in part by the Department of Veterans Affairs Health Services Research and Development Service (grant SDR 99-300-1).
Address correspondence to: Ann M. Borzecki, MD, MPH, CHQOER, Bedford VAMC (152), 200 Springs Rd, Bedford, MA 01730. E-mail: email@example.com.
1. Fisher ES, Whaley FS, Krushat WM, et al. The accuracy of Medicare's hospital claims data: progress has been made, but problems remain. Am J Public Health. 1992;82:243-248.
2. Kashner TM. Agreement between administrative data files and written medical records. Med Care. 1998;36:1324-1336.
3. Iezzoni LI. Assessing quality using administrative data. Ann Intern Med. 1997;127:666-674.
4. Jollis JG, Ancukiewicz M, DeLong ER, et al. Discordance of databases designed for claims payment versus clinical information systems. Ann Intern Med. 1993;119:844-850.
5. Halpern J. The measurement of quality of care in the Veterans Health Administration. Med Care. 1996;34(3 suppl):MS55-MS68.
6. Arts DGT, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical registries: a literature review, case study and generic framework. J Am Med Inform Assoc. 2002;9:600-611.
7. Chobanian AV. Control of hypertension: an important national priority. N Engl J Med. 2001;345(7):534-535.
8. Hajjar I, Kotchen TA. Trends in prevalence, awareness, treatment, and control of hypertension in the United States, 1988-2000. JAMA. 2003;290:199-206.
9. Yu W, Ravelo AL, Wagner TH, et al. The cost of common chronic diseases in the VA health care system. Presented as an abstract at: 20th Annual Meeting of the Department of Veterans Affairs Health Services Research and Development Service; February 13-15, 2002; Washington, DC.
10. SHEP Cooperative Research Group. Prevention of stroke by antihypertensive drug treatment in older persons with isolated systolic hypertension: final results of the Systolic Hypertension in the Elderly Program (SHEP). JAMA. 1991;265:3255-3264.
11. Amery A, Birkenhager W, Brixko P, et al. Mortality and morbidity results from the European Working Party on High Blood Pressure in the Elderly trial. Lancet. 1985;1:1349-1354.
12. Collins R, Peto R, MacMahon S, et al. Blood pressure, stroke, and coronary heart disease, part 2: short-term reductions in blood pressure: overview of randomized drug trials in their epidemiological context. Lancet. 1990;335:827-838.
13. Berlowitz DR, Ash AS, Hickey EC, et al. Inadequate management of blood pressure in a hypertensive population. N Engl J Med. 1998;339:1957-1962.
14. Borzecki AM, Wong AT, Hickey EC, et al. Hypertension control: how well are we doing? Arch Intern Med. 2003;163:2705-2711.
15. Frijling BD, Spies TH, Lobo CM, et al. Blood pressure control in treated hypertensive patients: clinical performance of general practitioners. Br J Gen Pract. 2001;51:9-14.
16. Joffres MR, Hamet P, Rabkin SW, et al. Prevalence, control and awareness of high blood pressure among Canadian adults. CMAJ. 1992;146:1997-2005.
17. Colhoun HM, Dong W, Poulter NR. Blood pressure screening, management and control in England: results from the health survey for England 1994. J Hypertens. 1998;16:747-752.
18. Meissner I, Whisnant JP, Sheps SG, et al. Detection and control of high blood pressure in the community: do we need a wake-up call? Hypertension. 1999;34:466-471.
19. Oliveria SA, Lapuerta P, McCarthy BD, et al. Physician-related barriers to the effective management of uncontrolled hypertension. Arch Intern Med. 2002;162:413-420.
20. Hyman DJ, Pavlik VN. Self-reported hypertension treatment practices among primary care physicians. Arch Intern Med. 2000;160:2281-2286.
21. Hajjar I, Miller K, Hirth V. Age-related bias in the management of hypertension: a national survey of physicians' opinions on hypertension in elderly adults. J Gerontol Med Sci. 2002;57A:M487-M491.
22. Sennett C. Implementing the new HEDIS hypertension performance measure. 2000;9(4 suppl):2-17.
23. Veterans Health Administration/Department of Defense. FY 2002 VHA Performance Measurement System Technical Manual. Washington, DC: The Office of Performance and Quality, Veterans Health Administration; March 2002. Available at: http://www.oqp.med.va.gov/cpg/HTN/P/HTN_3_8_02_techman.doc. Accessed December 20, 2003.
24. Dresser MVB, Feingold L, Rosenbranz SL, Coltin KL. Clinical quality measurement: comparing chart review and automated methodologies. Med Care. 1997;35:539-552.
25. Ashton CM, Septimus J, Petersen NJ, et al. Healthcare use by veterans treated for diabetes mellitus in the Veterans Affairs medical care system. Am J Manag Care. 2003;9:145-150.
26. Kizer KW. The "new VA": a national laboratory for health care quality management. Am J Med Qual. 1999;14:3-20.
27. Kerr EA, Smith DM, Hogan MM, et al. Comparing clinical automated, medical record, and hybrid data sources for diabetes quality measures. Jt Comm J Qual Improv. 2002;28:555-565.
28. VA Information Resource Center. Toolkit for new users of national VA data at the Austin Automation Center. Available at: http://www.virec.research.med.va.gov/Support/Training-NewUsersToolkit/Toolkit.htm. Accessed December 20, 2003.
29. Mugford M, Banfield P, O'Hanlon M. Effects of feedback of information on clinical practice: a review. BMJ. 1991;303:398-402.
30. Goldstein MK, Hoffman BB, Coleman RW, et al. Differences between initial and repeat blood pressure measurements during the same clinic visit in primary care practice. Presented as an abstract at: 19th Annual Meeting of the Department of Veterans Affairs Health Services Research and Development Service; February 14-16, 2001; Washington, DC.