We examined the impact of electronic reminders followed by performance reports and financial incentives. Physicians responded more to reports and incentives than to reminders alone.
Objectives: To evaluate the effects of a multifaceted quality improvement intervention during 2 time periods on 4 coronary artery disease (CAD) measures in 4 primary care practices. During the first phase, electronic reminders prompted physicians to order indicated medications or to record contraindications and refusals (exceptions). In the second phase, physicians also received reports about their performance (including lists of patients not satisfying these measures), and financial incentives were announced.
Study Design: Time series analysis.
Methods: Adult CAD patients seen within the preceding 18 months were included. The primary outcome was the performance on each measure (proportion of eligible patients satisfying each measure after removing those with exceptions). Secondary outcomes were the proportion with the medication on their medication list, and the proportion with exceptions.
Results: Median performance at baseline was 78.8% for antiplatelet treatment, 85.1% for statin treatment, 77.0% for beta-blocker after myocardial infarction (MI), and 67.1% for angiotensin-converting enzyme inhibitor or angiotensin receptor blocker after MI. Performance improved slightly for 3 measures during the first phase and improved more substantially for all 4 measures during the second phase. For 3 of 4 measures, however, documentation of exceptions increased but medication prescribing did not. Most exceptions were judged appropriate by peer review.
Conclusions: Physicians responded more to the combination of feedback and financial incentives than they had to electronic reminders alone. High performance was achieved for only 1 of 4 measures, and recording of exceptions, rather than increased medication prescribing, accounted for most of the observed improvements.
(Am J Manag Care. 2012;18(10):603-610)

We examined the sequential implementation of commonly used quality improvement techniques to improve outpatient coronary heart disease care in 4 practices.
Quality improvement techniques that leverage an electronic health record (EHR) have been shown to improve care in many cases.1 However, EHR-based quality improvement has not been universally successful, and even in many instances where study results were positive, the magnitude of the improvement was small.2-4 Furthermore, observational data do not suggest that simply having an EHR improves quality in outpatient settings.5-7 In contrast, we have shown in the UPQUAL study (Utilizing Precision Performance Measurement for Focused Quality Improvement) that interconnected EHR-based tools can improve quality for multiple process of care measures in a large urban, single-site, university-affiliated practice.8 This intervention was designed to improve quality measurement (including capture of contraindications and patient refusals), make point-of-care reminders more accurate, and provide more valid and responsive feedback to clinicians (including lists of patients not receiving essential medications).
In the current study, we applied these principles (improving quality measurement in order to enable more accurate point-of-care reminders and feedback) to coronary artery disease (CAD) care in 4 suburban primary care group practices (2 family medicine and 2 internal medicine) that belong to the same health system and use the same EHR. We selected CAD care because it is a common and important chronic disease and because implementing CAD measures in this setting was more feasible than several other candidate chronic disease and prevention topics. In this health system, point-of-care reminders were implemented first, in July 2008 (Phase 1). From September through November 2009 (Phase 2), feedback was given to physicians, and the medical group announced to physicians that financial incentives would be tied to performance measures (including the 4 measures studied here). Both the reminders and the physician feedback portions of the intervention were planned prior to Phase 1 by the study team. The financial incentives were initiated independently by organizational leadership not directly associated with this study. This sequential implementation provided an opportunity to observe the additional effects on measured performance of adding the combination of feedback and announced financial incentives to electronic reminders.
METHODS

Setting and Eligible Patients
We performed this study at 4 primary care practices in the northern suburbs of Chicago, Illinois, that use the same commercial EHR (EpicCare, Epic Systems Corporation, Verona, Wisconsin). Northwestern University’s and Northshore University HealthSystem’s institutional review boards approved the study. All patients eligible for 1 or more quality measures cared for by 33 attending physicians (10 family medicine and 23 internal medicine) and 15 family medicine resident physicians were included. The practices had used the EHR for 5 years before the start of the first intervention examined in the study.
Sequential Implementation of Quality Improvement Techniques

Measure Selection
We selected for consideration 4 measures of CAD care quality that were based on national measures: antiplatelet drug and lipid-lowering drug treatment in all patients with CAD, beta-blocker use in patients with prior myocardial infarction (MI), and angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) for diabetes or left ventricular systolic dysfunction.9 Health system clinicians, including cardiologists, discussed and modified the measures for local use, changing the lipid-lowering drug measure to statin treatment in CAD and changing the ACE inhibitor or ARB measure to apply to patients with prior MI only.
Electronic Clinical Decision Support (Phase 1)
Prior to these interventions, there were no other clinical decision support tools in use that addressed these topics. We added electronic point-of-care reminders that appeared during patient encounters when an apparently eligible patient did not have an indicated medication on their current medication list and had no exception recorded. These alerts were minimally intrusive (they did not interrupt clinicians’ work flow and the alert was indicated only by a single yellow highlighted tab that appeared on the left side of the screen when any clinical reminder criteria were present, and physicians had to select this tab to see the individual reminders). These alerts were displayed using existing EpicCare functionality. These electronic reminders included standardized ways to capture patient reasons (eg, refusals) or medical reasons that were exceptions for individual reminders within the reminder system of the EHR. These reminders were implemented in July 2008. We sent physicians educational e-mails with brief training materials to introduce the new alerts and to show how to record patient or medical exceptions.
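The alert-triggering rule described above (show a reminder only when an apparently eligible patient lacks the indicated medication and has no recorded exception) can be sketched in code. This is an illustrative reconstruction, not the EpicCare implementation; the `PatientRecord` structure and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical minimal record, for illustration only."""
    medications: set = field(default_factory=set)  # coded current medication list
    exceptions: set = field(default_factory=set)   # measures with a recorded medical/patient exception

def reminder_fires(patient: PatientRecord, indicated_med: str, measure: str) -> bool:
    """A reminder appears only when the indicated medication is absent from
    the current medication list AND no exception is recorded for the measure."""
    return (indicated_med not in patient.medications
            and measure not in patient.exceptions)

# An eligible CAD patient already on a statin: no statin reminder,
# but an antiplatelet reminder would fire.
p = PatientRecord(medications={"atorvastatin"})
print(reminder_fires(p, "atorvastatin", "statin"))   # False
print(reminder_fires(p, "aspirin", "antiplatelet"))  # True
```

Recording an exception through the reminder system would suppress the alert the same way a prescription does, which is central to how exceptions affect the measured performance reported below.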
Implementation of Feedback Reports and Announcement of Incentives (Phase 2)
Starting in September 2009, on a monthly basis, we gave physicians printed reports indicating their overall performance on each of the 4 measures for all their eligible patients and lists of individual patients who appeared to be eligible for an indicated medication but were not receiving it and had no exception recorded.
In October and November of 2009, the medical group leadership announced to physicians that a small portion of their compensation (1.5% of total compensation, which constituted 25% of the incentive-based compensation) would be tied to their performance on quality metrics, including the 4 metrics covered in this study.
Evaluation and Outcomes

Measure Calculation
We retrospectively calculated performance for the 4 CAD measures for each month from September 2007 through March 2011. At each time point, patients were eligible for a measure if they had an office visit with a physician from 1 of the 4 included practices during the preceding 18 months and had a qualifying International Classification of Diseases, Ninth Revision, Clinical Modification code on their active problem list, in their past medical history, or as an encounter diagnosis. We used Structured Query Language to retrieve data from an enterprise data warehouse that contains data copied daily from the EHR. At each time point, every patient was classified for each measure for which they were eligible as: a) satisfied the measure, b) did not satisfy the measure but had an exception, or c) did not satisfy the measure and had no documented exception. The primary outcome for each measure was the number of patients who satisfied the measure divided by the total number of eligible patients excluding those with an exception. As an equation, primary outcome = number satisfied / (number eligible - number who did not satisfy but had an exception). We also analyzed separately for each measure the proportion of eligible patients who satisfied the measure (ie, were given the medication) and the proportion of all eligible patients who did not satisfy the measure and had an exception.
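The outcome definitions above amount to simple proportions; a minimal sketch follows (the counts are hypothetical and the function names are our own):

```python
def primary_performance(n_satisfied: int, n_eligible: int, n_exception: int) -> float:
    """Primary outcome: satisfied / (eligible - non-satisfiers with an exception)."""
    return n_satisfied / (n_eligible - n_exception)

def medication_proportion(n_satisfied: int, n_eligible: int) -> float:
    """Secondary outcome: proportion of all eligible patients given the medication."""
    return n_satisfied / n_eligible

def exception_proportion(n_exception: int, n_eligible: int) -> float:
    """Secondary outcome: proportion of all eligible patients with an exception."""
    return n_exception / n_eligible

# Hypothetical month: 1000 eligible, 850 satisfied, 50 non-satisfiers with exceptions.
print(round(primary_performance(850, 1000, 50), 3))  # 0.895
print(medication_proportion(850, 1000))              # 0.85
print(exception_proportion(50, 1000))                # 0.05
```

Note how recording exceptions raises the primary outcome even when prescribing is unchanged: with the same 850 prescriptions but 100 exceptions, performance would be 850/900, or about 0.944.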
We performed peer review beginning in September 2009 of medical exceptions recorded in the EHR and continued the review process for exceptions entered within the first 10 months of the intervention. One physician reviewed medical records to collect the reason for the exception and additional clinical information needed to judge the validity of the exception. When the clinical reasoning was unclear, the peer reviewer would request clarification from the treating clinician. Two board-certified internists and 1 board-certified family medicine physician met regularly to review the exceptions and judged them as appropriate, inappropriate (including when no contraindication to the medication was evident on physician chart review or cases where clarifying information was requested from the primary care physician but was not provided), or of uncertain appropriateness by consensus. When a consensus was not reached or the appropriateness was uncertain, 1 physician reviewed the medical literature, requested advice from specialists when needed, and the group discussed the case again until consensus was reached. Practice physicians received e-mail or telephone feedback for medical exceptions that were judged to be inappropriate.
We used interrupted time series analysis to examine changes in the primary and secondary outcomes over time for 2 different interventions: before and after July 2008 (the transition between baseline and Phase 1), and before and after September 2009 (the transition between Phase 1 and Phase 2). We calculated the primary and secondary outcomes for each of the performance measures for each month from September 2007 through August 2010. A linear model was fit to each series including a continuous time variable, a dichotomous indicator of the intervention, and the interaction term of time and intervention as covariates. The individual data points used for each time period are depicted in the figures. Next, we determined the autoregressive order of the model residuals by minimizing Akaike’s information criterion.10 Finally, we fit a linear regression model with autoregressive errors (using the appropriate number of autoregressive parameters, if any were necessary) to each series. These fitted models were used to test statistical significance.11 To ensure model validity, we examined several residual diagnostics, the Jarque-Bera and the Shapiro-Wilk tests for normality of residuals, and normal Q-Q and autocorrelation plots.12-14 Analyses used SAS version 9.2 (SAS Institute Inc, Cary, North Carolina) and R software package version 2.13.1 (R Foundation for Statistical Computing, Vienna, Austria).
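The segmented (interrupted time series) model described above can be sketched with ordinary least squares on synthetic data. This illustration uses plain OLS only; the study additionally modeled autoregressive errors with the order chosen by Akaike's information criterion, which this sketch omits.

```python
import numpy as np

def segmented_fit(y: np.ndarray, t: np.ndarray, t_break: float) -> np.ndarray:
    """Fit y = b0 + b1*t + b2*I + b3*(t*I), where I = 1 at/after the break.
    b2 is the post-intervention level change; b3 is the trend change."""
    I = (t >= t_break).astype(float)
    X = np.column_stack([np.ones_like(t), t, I, t * I])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic monthly performance: flat at 0.78 for 12 months,
# then improving by 1 percentage point per month.
t = np.arange(24, dtype=float)
y = np.where(t < 12, 0.78, 0.78 + 0.01 * (t - 12))
b0, b1, b2, b3 = segmented_fit(y, t, t_break=12)
# b1 (baseline trend) is ~0; b3 (post-break trend change) is ~0.01 per month.
```

Testing the interaction coefficient (b3) against zero corresponds to asking whether the intervention changed the monthly trend, which is how the Phase 1 and Phase 2 transitions were evaluated.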
RESULTS

Patients and Their Characteristics
The number of patients eligible for the coronary disease measures and their characteristics are provided in Table 1. The number of eligible patients increased over time from 779 CAD and 218 MI patients in October 2007 to 1099 CAD and 332 MI patients by October 2010. Their characteristics changed little during the 3 years we examined (Table 1).
Performance During Baseline Period
Median performance during the baseline period was 78.9% for antiplatelet treatment, 85.3% for statin treatment, 77.0% for beta-blocker after MI, and 67.2% for ACE inhibitor or ARB after MI (Table 2). Performance on the antiplatelet measure was increasing significantly during the baseline period. Performance on the other 3 measures did not change significantly during the baseline period (Figures 1 and 2).
Performance During Phase 1
During Phase 1, overall performance continued to increase for the antiplatelet measure at a rate that was similar to the rate of improvement observed during the baseline period. There were statistically significant, but very small, increases in overall performance for the statin and ACE inhibitor/ARB measures (Table 2 and Figure 1). However, there were no significant increases in the rates of patients given medication for these 2 measures, and there was a decrease in patients given beta-blockers during Phase 1 compared with baseline (Table 2 and Figure 2). Physicians recorded exceptions to these measures during Phase 1 for small percentages of eligible patients (Figure 3).
Performance During Phase 2
There were significant increases in measured performance for all 4 measures during Phase 2 compared with Phase 1 (Table 2 and Figure 1). For 3 of the 4 measures, there was an increase in the documentation of exceptions (Table 2 and Figure 3) but not medication prescribing (Figure 2). For the antiplatelet measure there was a significant increase in both medication prescribing and exception documentation (Table 2, Figures 2 and 3).
By March 2011, overall performance was 99.7%, 91.1%, 86.8%, and 78.9% for antiplatelet treatment, statin treatment, beta-blocker after MI, and ACE inhibitor or ARB after MI, respectively.
We performed peer review for 179 medical exceptions. For 13 of these, a physician recorded an exception even though the patient was given the medication (the physician both recorded an exception and ordered the medication). Exceptions of this kind would not be included in the performance calculation, since the numerator criteria were met. Of the remaining 166 exceptions, 145 (87.3%) were judged appropriate on peer review, 19 (11.4%) had an inappropriate reason or no reason found for not prescribing, and 2 (1.2%) were of uncertain appropriateness (eAppendix, available at www.ajmc.com).
DISCUSSION

We used time series analysis to examine the effects of implementing electronic clinician reminders directed at 4 measures of outpatient coronary artery disease care, followed later by physician audit and feedback, lists of individual patients not meeting the measures, and the announcement of financial incentives. We made several observations that warrant discussion.
Introducing electronic reminders alone during Phase 1 had very little impact on measured quality. Physicians did interact with the clinical decision support (CDS) system to some degree, as evidenced by recording of exceptions using the CDS system. However, the extent of this usage was modest, and led to very small changes in measured quality. These findings are consistent with older studies of CDS aimed at changing outpatient provider prescribing behavior for coronary artery disease that found small effects, or no effect, from electronic clinician reminders.15-18
After the start of individualized feedback to physicians with lists of patients not satisfying the measures and the announcement that there would be financial incentives based on performance (Phase 2), there were much more prominent changes in performance. During this second intervention period, there was a sharp increase in physicians’ use of exception recording for all 4 measures as well as an increase in patients with antiplatelet medication recorded on their medication lists. As a result of these changes, there were significant increases in measured performance for all 4 measures. This observed change suggests that linking CDS tools to local accountability systems like incentives or providing feedback on performance can lead to greater physician engagement with EHR quality improvement tools than would occur with CDS alone. Prior studies have shown a positive effect of providing audit and feedback to physicians,19 and inconsistent effects of financial incentives to improve quality.20,21 Because performance feedback and the announcement of incentives occurred at nearly the same time in this healthcare system, we cannot distinguish the relative contributions of each to the observed changes. Future studies should examine whether adding financial incentives to performance feedback produces a greater effect on quality than feedback alone. This distinction is important since the long-term sustainability of financial incentives requires that the organization commit some amount of financial resources to prioritizing quality over other behaviors such as clinical volume.
We do not know why these interventions appeared to influence the 4 measures differently. Both drug prescribing and recording of exceptions increased for the antiplatelet drug measure. By the end of the study, overall performance for this measure approached 100% (86.6% had a qualifying drug on their medication list and another 13.2% had an exception recorded). The improvement observed during Phase 2 for the other 3 measures occurred exclusively because the recording of exceptions increased. The proportion of eligible patients with the drugs recorded on their medication lists did not improve; in 1 case, beta-blocker after MI, it declined. By the end of the study, sizable numbers of patients remained for 3 measures who neither received the treatment nor had a recorded exception. We can speculate as to why this might be the case. Aspirin prescribing for secondary CAD prevention may be easier to improve than the other measures because aspirin is an over-the-counter medication that in some cases may not be recorded in the coded EHR medication list even for patients already taking it. These interventions may have prompted physicians to add to the medication list aspirin use that was already documented elsewhere in the medical record, which would be easier than starting a truly new prescription, as the latter would require contacting the patient or waiting until the next office visit. However, in our prior work, we observed through chart review that in approximately half of instances with newly documented aspirin prescriptions there was evidence that this represented a new prescription (unpublished data). It may be that aspirin prescribing is an easier behavior to change using quality improvement interventions. The prior study we performed at an urban academic primary care practice8 and another study of clinical reminders16 both showed that aspirin or antiplatelet prescribing for patients with CAD could be increased.
In our prior study, however, lipid-lowering drug prescribing also increased following a similar combined intervention, and there was also a non-significant increase in beta-blocker prescribing.8 Changes in drug prescribing for these measures were not observed here. ACE or ARB prescribing achieved the lowest level of performance and had the highest proportion with recorded exceptions. This may have been due to the fact that local clinical leadership selected criteria for ACE or ARB use that differed from national performance metrics and clinical guidelines that suggest that ACE or ARB be used in patients with CAD and diabetes or left ventricular systolic dysfunction rather than CAD and prior MI.9,22,23
One possible reason drug prescribing did not improve may be that physicians had valid medical reasons for not prescribing these medications for many patients. In the Physicians Advancing Health Information Technology to Improve Cardiovascular Care (Cardio-HIT) study, 5 cardiology and primary care practices submitted electronic data for 4 measures similar to the ones employed here. Performance, calculated in a fashion similar to ours, ranged from 69% for beta-blocker after MI to 80% for antiplatelet therapy. These rates are similar to those observed at the start of our observation period. In Cardio-HIT, among instances that appeared to be quality failures based on coded electronic data, chart review showed that only 25.4% were actual quality failures; the rest either had an exception or had the drug prescribed but not recorded on the EHR medication list.24 The findings of the peer review of medical exceptions performed in the present study, in our prior study,25 and in the Cardio-HIT study24 suggest that most recorded medical exceptions represented legitimate medical reasons for not using a treatment in these 3 study populations. Because we did not perform chart reviews for patients with apparent quality deficits remaining, we do not know what proportion truly had quality deficits.
This study has several additional limitations. This was not a controlled trial, and other factors occurring contemporaneously may have influenced our findings. We attempted to identify and include in this report other changes taking place within this healthcare system that may have influenced performance on these measures, but may not have accounted for all of them. We cannot separate out the effect of the feedback reports from the impact of the announcement that financial incentives linked to performance would be going into place. While this was performed within 1 health system, the study included both internal medicine and family medicine primary care providers practicing in several geographic locations. Still, the generalizability of these findings to other settings or provider groups is not known.
Implementation of CDS in the form of point-of-care reminders alone had only small effects on outpatient CAD quality measures. After monthly provider feedback was added and financial incentives were announced, there was a sharp increase in the use of the CDS system to record exceptions. However, increased drug prescribing was only observed for 1 of 4 measures. For 3 measures, sizable groups of patients remained who neither had the medication prescribed nor an exception recorded. Until it becomes a professional norm that physicians expect themselves and their colleagues to either provide recommended treatments or record in a visible fashion when this is not possible, apparent quality gaps are likely to remain.

Acknowledgments
We would like to acknowledge Lourdes Link, project manager in the Department of Health Information Technology, and Erin Duval, quality manager in the Quality Improvement Department, Northshore University Health System, Evanston, IL, for their valuable contributions to this project and the valuable support of Angela Bicos, MD, Edward Blumen, MD, Michael Dowling, MD, David Holub, MD, and William Seiden, MD, who served as site lead physicians for this study.
Author Affiliations: From Division of General Internal Medicine (SDP, NCD, EMF, JYL, DWB), Institute for Healthcare Studies (SDP, DWB), Feinberg School of Medicine, Northwestern University, Chicago, IL; Northwestern Medical Faculty Foundation (SDP, NCD, DK, DWB), Chicago, IL; Department of Medicine (JK), Health Information Technology (SL), Northshore University Health System, Evanston, IL; Department of Family Medicine (TG), University of Illinois, Chicago, IL.
Funding Source: Grant 1R18HS17163-01, Agency for Healthcare Research and Quality. Dr Persell was supported by career development award 1K08HS015647-01 from the Agency for Healthcare Research and Quality.
Author Disclosures: The authors (SDP, JK, TG, NCD, SL, DK, EMF, JYL, DWB) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (SDP, JK, NCD, DK, DWB); acquisition of data (SDP, JK, TG, NCD, SL, DK, EMF, DWB); analysis and interpretation of data (SDP, JK, TG, JYL, DWB); drafting of the manuscript (SDP, DWB); critical revision of the manuscript for important intellectual content (JK, TG, NCD, SL, DK, EMF, JYL, DWB); statistical analysis (SDP, JYL); provision of study materials or patients (JK, TG); obtaining funding (DWB); administrative, technical, or logistic support (JK, SL, EMF); and supervision (SDP, TG, DWB).
Address correspondence to: Stephen D. Persell, MD, MPH, Assistant Professor, Division of General Internal Medicine, Feinberg School of Medicine, Institute for Healthcare Studies, Northwestern University, 750 N Lake Shore Dr, 10th Fl, Chicago, IL 60611. E-mail: email@example.com.

1. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood). 2011;30(3):464-471.
2. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
3. Goldzweig CL, Towfigh A, Maglione M, Shekelle PG. Costs and benefits of health information technology: new trends from the literature. Health Aff (Millwood). 2009;28(2):w282-w293.
4. O’Connor PJ, Sperl-Hillen JM, Rush WA, et al. Impact of electronic health record clinical decision support on diabetes care: a randomized trial. Ann Fam Med. 2011;9(1):12-21.
5. Linder JA, Ma J, Bates DW, Middleton B, Stafford RS. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med. 2007;167(13):1400-1405.
6. Crosson JC, Ohman-Strickland PA, Hahn KA, et al. Electronic medical records and diabetes quality of care: results from a sample of family medicine practices. Ann Fam Med. 2007;5(3):209-215.
7. Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Arch Intern Med. 2011;171(10):897-903.
8. Persell SD, Kaiser D, Dolan NC, et al. Changes in performance after implementation of a multifaceted electronic-health-record-based quality improvement system. Med Care. 2011;49(2):117-125.
9. American College of Cardiology Foundation/American Heart Association/American Medical Association—Physician Consortium for Performance Improvement. Clinical performance measures: chronic stable coronary artery disease. Chicago, IL: American Medical Association; 2005.
10. Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19(6):716-723.
11. Trapletti A, Hornik K. tseries: Time Series Analysis and Computational Finance. R package version 0.10-18, 2009.
12. Jarque CM, Bera AK. Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Economics Letters. 1980;6(3):255-259.
13. Shapiro SS, Wilk MB. An analysis of variance test for normality (complete samples). Biometrika. 1965;52(3/4):591-611.
14. Wilk MB, Gnanadesikan R. Probability plotting methods for the analysis of data. Biometrika. 1968;55(1):1-17.
15. Tierney WM, Overhage JM, Murray MD, et al. Effects of computerized guidelines for managing heart disease in primary care: a randomized, controlled trial. J Gen Intern Med. 2003;18(12):967-976.
16. Sequist TD, Gandhi TK, Karson AS, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc. 2005;12(4):431-437.
17. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002;325(7370):941.
18. Demakis JG, Beauchamp C, Cull WL, et al. Improving residents’ compliance with standards of ambulatory care: results from the VA Cooperative Study on Computerized Reminders. JAMA. 2000;284(11):1411-1416.
19. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006(2):CD000259.
20. Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med. 2006;145(4):265-272.
21. Doran T, Kontopantelis E, Valderas JM, et al. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework. BMJ. 2011;342:d3590.
22. Drozda J Jr, Messer JV, Spertus J, et al. ACCF/AHA/AMA-PCPI 2011 Performance Measures for Adults With Coronary Artery Disease and Hypertension: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures and the American Medical Association-Physician Consortium for Performance Improvement. Circulation. 2011;124(2):248-270.
23. Fraker T Jr, Fihn SD, Gibbons RJ, et al. 2007 chronic angina focused update of the ACC/AHA 2002 Guidelines for the management of patients with chronic stable angina: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines Writing Group to develop the focused update of the 2002 Guidelines for the management of patients with chronic stable angina. Circulation. 2007;116(23):2762-2772.
24. Kmetik KS, O’Toole MF, Bossley H, et al. Exceptions to outpatient quality measures for coronary artery disease in electronic health records. Ann Intern Med. 2011;154(4):227-234.
25. Persell SD, Dolan NC, Friesema EM, Thompson JA, Kaiser D, Baker DW. Frequency of inappropriate medical exceptions to quality measures. Ann Intern Med. 2010;152(4):225-231.