This is a comparative effectiveness study that evaluates the safety effects of 2 types of commercially available electronic prescribing systems.
Published Online: December 16, 2011
Rainu Kaushal, MD, MPH; Yolanda Barron, MS; and Erika L. Abramson, MD, MS
Objectives: The increasingly widespread adoption of electronic health records (EHRs) is substantially changing the American healthcare delivery system. Differences in the actual effectiveness of EHRs and their component applications, including electronic prescribing (e-prescribing), are not well understood. We compared the effects of 2 types of e-prescribing systems on medication safety as an example of how comparative effectiveness research (CER) can be applied to the study of healthcare delivery.
Study Design and Methods: We previously conducted 2 non-randomized, prospective studies with pre–post controls comparing prescribing errors among: (1) providers who adopted a stand-alone e-prescribing system with robust technical and clinical decision support (CDS) and (2) providers who adopted an EHR with integrated e-prescribing with less robust available CDS and technical support. Both studies evaluated small groups of ambulatory care providers in the same New York community using identical methodology, including prescription and chart reviews. We undertook this comparative effectiveness study to directly compare prescribing error rates between the 2 groups of e-prescribing adopters.
Results: The stand-alone system reduced error rates from 42.5 to 6.6 errors per 100 prescriptions (P <.001). The integrated system reduced error rates from 26.0 to 16.0 per 100 prescriptions (P = .07). After adjusting for baseline differences, stand-alone users had a 4-fold lower rate of errors at 1 year (P <.001).
Conclusions: Despite improved work flow integration, the integrated e-prescribing application performed less well, likely due to differences in available CDS and technical resources. Results from this small study highlight the importance of CER that directly compares components of healthcare delivery.
(Am J Manag Care. 2011;17(12 Spec No.):SP88-SP94)
Comparative effectiveness research (CER) can be expanded beyond its usual focus on treatment or intervention options to evaluate healthcare delivery systems, including electronic health records (EHRs) and electronic prescribing (e-prescribing).
Our small CER study of ambulatory providers found that use of a stand-alone e-prescribing application led to a greater reduction in prescribing errors than use of an integrated EHR with e-prescribing.
System features, as well as implementation and training resources, likely contributed to these findings.
Given the large national investment in EHRs, future CER research should be conducted to understand the actual effects of EHR systems in active use.
The Health Information Technology for Economic and Clinical Health (HITECH) Act is providing up to $30 billion for the use of interoperable electronic health records (EHRs) in meaningful ways that improve healthcare quality, safety, and efficiency.1 Specific measures will be used to determine if a provider meets criteria for incentives, including electronic prescribing (e-prescribing).2 To decrease intersystem variability, providers must use a certified EHR, including e-prescribing that meets certain standards. However, even among certified EHRs, differences in performance arise in actual system use.
Comparative effectiveness research (CER) is designed to inform healthcare decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options and interventions.3 Traditionally, CER has directly compared the outcomes of 2 therapeutic interventions, most commonly medications, using randomized controlled trials. To date, there has been little CER evaluating different health information technology (HIT) in actual use.4
Previously, using identical methodology, we conducted 2 separate prospective studies comparing the effects of 2 types of e-prescribing systems on medication safety.5,6 The systems were implemented and used by ambulatory providers in a single community but had important differences. The first was a stand-alone system with advanced clinical decision support (CDS) and extensive technical support. The second was an e-prescribing application integrated within an EHR with less robust available CDS and more limited technical support. These studies provided an ideal opportunity to perform a novel comparative effectiveness study of these 2 system types. Our hypothesis was that despite technical and CDS limitations, the integrated application would better improve medication safety due to improved work flow integration and increased diversity of patient data from the EHR available to the prescriber at the point of care.
In this study, we compare the effectiveness of a commercially available stand-alone e-prescribing system to an e-prescribing system integrated within an EHR by analyzing paper prescriptions at baseline and e-prescriptions 1 year later for 21 ambulatory care providers in the same New York community. The data were obtained from 2 previous pre–post studies evaluating prescribing safety among e-prescribing adopters compared with control providers who used handwritten prescriptions.5,6 The same group of 15 non-adopters served as concurrent controls for both original studies and, in each case, prescribing errors were high at baseline for all study participants but were significantly higher for non-adopters at 1 year. We obtained institutional review board approval from Weill Cornell Medical College, and providers gave written informed consent.
The Institute of Medicine classifies medication errors as any error in the medication use process (prescribing, transcribing, dispensing, administering, and monitoring).7 We focused only on prescribing errors, such as omitting quantity to be dispensed. Near misses were prescribing errors with potential for harm that were either intercepted or reached the patient but did not cause harm. An example was prescribing penicillin for a patient with a known allergy who did not receive the medication because of pharmacist error detection. Adverse drug events (ADEs) were injuries from a medication, a subset of which was preventable. Rule violations were departures from strict standards of prescribing that were well understood and unlikely to cause harm, such as failure to write “po” for a medication only taken orally. These were not included in error rates but were counted, as they can result in significant rework.
We studied 11 adult primary care practices in a predominantly rural and suburban region of New York State between September 2005 and July 2008. Fifteen providers from 6 different practices adopted a stand-alone system, and 6 providers from 5 practices adopted an integrated system. Physicians were members of the not-for-profit independent practice association. All members were sent a letter in May 2005 detailing incentives for adopting e-prescribing and inviting them to participate in a research study. Discounts on EHR licenses were provided as an incentive. Practices ranged in size from 1 to 7 providers, and none was academically affiliated.
Stand-alone E-prescribing System
The stand-alone system was a Web-based, commercially available system. Providers had access to an electronic reference guide for dosing recommendations, medication lists, and allergies, if this information had been entered. The system provided CDS alerts for drug allergies, drug–drug interactions, duplicate drug therapies, incorrect drug frequencies, incorrect dosing, and pregnancy and breast-feeding contraindications. The system checked for insurance eligibility and formulary compliance. Prescriptions could be sent to pharmacies electronically. This system was ultimately not certified, as it was not integrated within an EHR. Providers performed other clinical documentation on paper.
E-prescribing System Integrated Within an EHR
The integrated system was a commercially available system fully integrated within an EHR. Providers had electronic access to all information in the EHR, including patient history, medications, allergies, diagnoses, and laboratory and demographic data. The system provided the same types of alerts as the stand-alone system but additionally provided drug–disease interaction alerts and disease-specific drug recommendations. Patient insurance eligibility and formulary compliance were also checked, and prescriptions could be sent electronically to pharmacies. Notably, the e-prescribing module was relatively immature because its CDS was only partially configured at the time of the study; CDS for e-prescribing was fully configured after the study's completion.
A for-profit Health Information Service Provider (HSP) provided implementation and ongoing technical support, including routine monitoring of e-prescribing compliance to encourage 100% use.8 Because the HSP was new to providing implementation services at the time the integrated practices went live, those providers received far less training initially (on average 1 hour) compared with stand-alone users (on average 40 hours), who went live later.
Data Collection and Review
Prescription Collection. We collected carbon copies at baseline and electronic downloads of all prescriptions written by providers during a 2-week period at 1 year. During both time periods, we obtained a minimum of 75 prescriptions on at least 25 patients per provider, extending data collection if necessary, and limited review to 3 prescriptions per patient to minimize clustering of errors. Non-duplicate prescription pads were removed at baseline to ensure that providers used the duplicate prescription pads.
Prescription Review. A research nurse reviewed each prescription in an identical manner guided by extensively utilized, standardized methodology.9-11 Training included review of error definitions, legibility assessments, and review of test and actual cases. Two data collectors jointly reviewed cases, after which reviewers worked independently. Reviewers classified prescribing errors and rule violations and evaluated the use of ADE-trigger drugs. Inappropriate abbreviation errors were from the Joint Commission on Accreditation of Healthcare Organizations' "Do Not Use" list, established to denote abbreviations with great potential to cause medical errors.12
We determined interrater reliability by having 2 reviewers examine the same random sample of 2% of the data. Interrater agreement for overall and type of error was 1.0, indicating excellent agreement.
Chart Review. An ambulatory chart review was performed for suspected near misses or when a drug often used to treat an ADE was prescribed.
Physician Review and Classification. Two physicians blinded to the providers’ prescribing method independently reviewed all suspected near misses and ADEs. Interrater agreement for the presence of prescribing errors and near misses was 0.96 and 0.93, indicating excellent agreement.
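Agreement statistics such as these are conventionally summarized with the kappa statistic, which corrects raw percent agreement for agreement expected by chance (the study reports estimating kappa statistics in SAS). As an illustration only, the calculation can be sketched in Python using hypothetical reviewer ratings, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary ratings from 2 independent reviewers
# (1 = prescribing error present, 0 = absent); illustrative values only.
reviewer_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]

# Cohen's kappa adjusts observed agreement for chance agreement.
kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(kappa)  # identical ratings yield kappa = 1.0
```

Values near 1.0, as reported above, are conventionally interpreted as excellent agreement.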
Statistical Analysis. We compared error rates per 100 prescriptions for (1) stand-alone and integrated adopters at baseline, (2) stand-alone adopters at baseline and 1 year, (3) integrated adopters at baseline and 1 year, and (4) stand-alone versus integrated adopters at 1 year using mixed-effects Poisson regression models that included e-prescribing system type, study time, and an interaction term between system type and study time. We adjusted for clustering at the provider level and assumed an independent correlation structure for these Poisson models. We calculated 95% Poisson confidence intervals (CIs) with cluster-robust standard errors for the rates. We used SAS for PC version 9.2 (SAS Institute Inc, Cary, North Carolina) to estimate kappa statistics, χ2, and t tests, and Stata 11 (StataCorp, College Station, Texas) to estimate mixed-effects Poisson regression models and to calculate 95% Poisson CIs with cluster-robust standard errors.