Although some interventions may enhance medication safety, an electronic medical record reminder to providers may not be an efficient use of resources.
Objective: To test the efficiency and cost-effectiveness of interventions aimed at enhancing laboratory monitoring of medication.
Study Design: Cost-effectiveness analysis.
Methods: Patients of a not-for-profit, group-model HMO were randomized to 1 of 4 interventions: an electronic medical record reminder to the clinician, an automated voice message to patients, pharmacy-led outreach, or usual care. Patients were followed for 25 days to determine completion of all recommended baseline laboratory monitoring tests. We measured the rate of laboratory test completion and the cost-effectiveness of each intervention. Direct medical care costs to the HMO (repeated testing, extra visits, and intervention costs) were determined using trial data and a mix of other data sources.
Results: The average cost of patient contact was $5.45 in the pharmacy-led intervention, $7.00 in the electronic reminder intervention, and $4.64 in the automated voice message reminder intervention. The electronic medical record intervention was more costly and less effective than other methods. The automated voice message intervention had an incremental cost-effectiveness ratio (ICER) of $47 per additional completed case, and the pharmacy intervention had an ICER of $64 per additional completed case.
Conclusions: Using the data available to compare strategies to enhance baseline monitoring, direct clinician messaging was probably not an efficient use of resources. Depending on a decision maker’s willingness to pay, automated voice messaging and pharmacy-led efforts can be efficient choices to prompt therapeutic baseline monitoring.
(Am J Manag Care. 2009;15(5):281-289)
Recent studies indicate that laboratory monitoring of medications at initiation of therapy is below the level recommended by guidelines, with as many as 39% of patients not receiving recommended testing.1 Lack of monitoring is a concern because of potential adverse events (eg, hyperkalemia associated with inhibitors of angiotensin) and because of failure to achieve therapeutic benefit due to inadequate blood levels of medication. Additionally, failure to establish baseline levels makes it difficult to determine a patient’s trends in laboratory values.
These concerns regarding patient safety and clinical effectiveness have led researchers to test methods to enhance laboratory-based medication monitoring. Several types of interventions (including pharmacy-led efforts, electronic reminders to clinicians, and automated telephone call reminders to patients) to improve laboratory monitoring of medications at therapy initiation have been effective in randomized trials.1,2 Although all these interventions improve monitoring, the most efficient intervention methods are not clear, and no economic analyses have been done to inform policy makers in this area. Some efforts may be particularly resource intensive, but could be worth the added expenditure when the potential adverse outcome is severe. Without careful analysis of the balance between costs and benefits, one cannot determine which (if any) interventions ought to be funded by healthcare payers. The efficiency of alternative approaches to therapeutic monitoring is of growing importance to healthcare providers as this monitoring is now a focus of quality measurement.3 To help with decision making regarding laboratory-monitoring interventions, we undertook a preplanned cost-effectiveness analysis of a randomized trial that tested several interventions aimed at enhancing laboratory monitoring of medication.2
Complete details of the trial design are available elsewhere.2 The study was conducted at a not-for-profit, group-model HMO and was approved by its institutional review board. All HMO patients who had not received baseline laboratory tests (defined as within 6 months before or 5 days after a newly dispensed study medication) were randomized to 1 of 4 conditions: an electronic medical record reminder to the patient’s primary care provider (EMR arm), an automated voice message to patients (AVM), pharmacy team outreach (Pharmacy), or usual care (UC) (Figure 1). In the EMR intervention, a patient-specific electronic message was sent to the primary care clinician from the chair of the HMO’s patient safety committee stating that computer records indicated the patient had received a new medication, that laboratory monitoring was recommended, and that the patient had not received the test(s) between 6 months before and 5 days after the dispensing. The message referenced internal and external guideline resources, recommended specific tests, and provided a sample letter the clinician could send to the patient. The AVM intervention included telephone messages advising the patient that laboratory tests were required for a medication the patient had received; the patient was advised that the testing had been ordered and could be completed at any HMO laboratory. The Pharmacy intervention began with a telephone call from a nurse in the pharmacy department to the patient to encourage laboratory testing. If the nurse successfully contacted the patient, a follow-up letter reminded the patient to obtain the laboratory test(s). If telephone contact was not successful, the nurse sent a letter suggesting that the patient go in for testing. If patients had questions or concerns about their medication during the contacts, a pharmacist was available for consultation.
Study medications (and lab tests required) were angiotensin-converting enzyme inhibitors or angiotensin receptor blockers (serum creatinine, serum potassium), allopurinol (serum creatinine), carbamazepine (aspartate aminotransferase [AST] or alanine aminotransferase [ALT], complete blood count, serum sodium), diuretics (serum creatinine, serum potassium), metformin (serum creatinine), phenytoin (AST/ALT, complete blood count), pioglitazone (AST/ALT), potassium supplements (serum potassium, serum creatinine), statins (AST/ALT), and terbinafine (AST/ALT, serum creatinine). The primary outcome was laboratory test completion, defined as the proportion of patients with all recommended baseline laboratory monitoring tests completed at 25 days after the intervention date. In the year before randomization, the laboratory-monitoring rates at the initiation of therapy (those who had initiated a study medication and had completed all recommended baseline laboratory testing) were similar in the study groups (about 60%). Other characteristics of the study groups also were similar, but the AVM group had a smaller proportion of female primary care physicians (24% vs ~40%). A total of 961 patients were included in the clinical trial. By day 25 after the intervention, 22.4% (53 of 237 patients) in the UC arm, 48.5% (95 of 196 patients) in the EMR arm, 66.3% (177 of 267 patients) in the AVM arm, and 82.0% (214 of 261 patients) in the Pharmacy arm had completed recommended monitoring (P <.001). A total of 72 abnormal test results were found among the 961 patients (7.5%).
We followed best practice in economic evaluation as outlined by the US Public Health Service.4 Our economic analysis examined the incremental cost per additional case completed (defined as enrollees who had all guideline-specified laboratory tests completed) and the incremental cost per abnormal case detected. We calculated the incremental cost-effectiveness ratio (ICER) by dividing the difference in cost by the difference in cases completed (or abnormal cases detected); interventions with lower ICERs are a better value for the money. Interventions with a higher ICER also may be cost-effective, depending on a decision maker’s willingness to pay for each additional unit of effect. Interventions were ranked on cost, and dominated options (ie, more costly but less effective) were identified. To account for the uncertainty due to sampling variation in cost-effectiveness analysis, we plotted cost-effectiveness acceptability curves5; these curves show the probability of each intervention being cost-effective at a given willingness to pay for an additional completed case (or abnormal case detected). All analyses were conducted using Stata release 9.0 (StataCorp, College Station, TX) and Microsoft Excel 2003 (Microsoft, Redmond, WA).
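The ICER calculation described above is a simple ratio. The sketch below illustrates it; the AVM cost and completion figures echo the Results (per 100 patients), but the usual-care cost shown is a hypothetical stand-in, not a value reported in the trial.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per
    additional unit of effect (here, per additional completed case)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Per 100 patients: AVM completes 66 cases at $4,159 (from the Results);
# the $2,100 usual-care cost is a hypothetical illustration.
ratio = icer(cost_new=4159, effect_new=66, cost_old=2100, effect_old=22)
print(round(ratio))  # 47 dollars per additional completed case
```

Strategies are compared pairwise along the cost-ranked frontier, so each ICER is computed against the next-cheapest nondominated option.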
Most of the cost data were collected directly from the trial, with some expert opinion based on formal data-gathering techniques as described below. The perspective of the analysis was the HMO. Thus, we included only direct medical care costs incurred by the HMO. The scope of the analysis included the costs within the 25 days after the intervention of (1) all recommended laboratory tests (including repeated testing), (2) extra visits associated with abnormal tests (validated from pharmacist chart review), and (3) performing the intervention. The analysis does not include potential offsets of poor outcomes averted (eg, lactic acidosis, liver toxicity) because those data were too sparse to answer those questions effectively. Also excluded from the analysis were development costs and patient costs, such as travel time and copayments. To maintain consistency with the efficacy analysis, the primary outcome was the cost per completed case within 25 days of dispensing, with a secondary analysis of the cost per additional enrollee with 1 or more abnormal laboratory tests within 25 days of dispensing.
Table 1 details the unit costs (ie, prices), activities, data sources, and resource assumptions used in the analysis. To improve generalizability to other systems, salary costs were taken from sources reflecting the prevailing wage rate in Portland, Oregon, with a fringe benefit rate of 30% and overhead rate of 20% added to fully allocate the costs. Laboratory testing costs come from the HMO’s laboratory accounting system and include patient intake, phlebotomy, testing, and reporting. Mailing costs were applied based on estimates for bulk mailing, and costs for clinic visits came from the HMO’s cost structure.
Resources used in the performance of all the interventions included chart review to ensure patient eligibility, tracking systems for patient follow-up, and noting the intervention delivery in the patient’s medical record. Additional tasks were study arm specific. In the Pharmacy arm they included time for mailings and outreach phone calls and their documentation. The AVM intervention required time to upload files. The cost of maintaining the automated telephone system was embedded in the vendor charge to the HMO. The EMR intervention costs included nurse time to send the message and clinician follow-up activities. Because existing EMR functionality was used to provide the EMR messages, no incremental programming resources were necessary to provide the intervention. The time (in minutes) taken to complete these tasks was recorded for a sample of patients in the Pharmacy, AVM, and EMR arms. To establish patient-level resource use, each patient in the appropriate arm was assigned an imputed value randomly from the sample, preserving the sample’s underlying (observed) distribution. Analyst time to create and maintain the patient lists from automated data was taken from the trial.
In the base-case analysis, formal interviews were undertaken with 13 HMO clinicians to estimate time (in minutes) for all aspects of responding to the EMR message: reading the message, ordering tests, and review and follow-up (for normal and abnormal results separately) on laboratory results. We then imputed patient-level resource use for these items by randomly selecting from a triangular distribution, using the experts’ estimates of the mode and range. Because they came from expert opinion, we undertook sensitivity analyses by assigning the low estimates and, separately, the high estimates of time required for ordering, review, and follow-up on normal and abnormal test results. Additionally, because of uncertainty regarding the frequency and type of contact for patients in the EMR arm (eg, telephone, letter, during visits), we undertook a sensitivity analysis that reduced the cost of contact in that arm.
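The triangular-distribution imputation above can be sketched in a few lines. The low/mode/high minutes below are hypothetical illustrations, not the clinicians’ actual interview estimates.

```python
import random

def impute_minutes(low, mode, high, n_patients, seed=0):
    """Draw patient-level task times (minutes) from a triangular
    distribution built from expert low/mode/high estimates."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.triangular(low, high, mode) for _ in range(n_patients)]

# Hypothetical example: reading the EMR message is estimated to take
# 2 minutes typically, ranging from 1 to 5 minutes.
times = impute_minutes(low=1, mode=2, high=5, n_patients=100)
```

Sampling the full distribution (rather than assigning every patient the mode) preserves between-patient variation in the cost estimates, which is what the sensitivity analyses on the low and high estimates then stress-test.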
Table 2 presents the resources used per patient by arm (ie, unit costs from Table 1 multiplied by actual units of use). The increase in laboratory testing associated with each of the interventions is evident from changes in the average cost of testing ($18.65 for UC, vs $26.69, $32.44, and $40.93 for EMR, AVM, and Pharmacy, respectively); the cost for review and follow-up of both normal and abnormal test results followed a similar pattern. The average cost of patient contact (including chart review, notification tracking, and charting) was $5.45 in the Pharmacy arm. The EMR arm used about $7 ($3.18 + $3.77) for patient contact, while the AVM arm cost for patient contact came to $4.64 ($3.55 + $1.09).
Table 3 shows that enhanced effectiveness comes with increases in total cost. Pharmacy, the intervention with the greatest proportion of completed cases at 82 per 100 patients, is also the most expensive at $5160 per 100 patients, whereas the AVM arm yielded 66 completed cases per 100 patients at a cost of $4159. The EMR arm was “dominated,” because using a mix of UC (eg, for 40% of patients) and AVM (eg, for 60% of patients) would be both less expensive and more effective; the EMR strategy is therefore a suboptimal choice. The AVM intervention had an ICER of $47 per additional completed case; the Pharmacy arm had an ICER of $64 per additional completed case. At a lower level of willingness to pay for an additional completed case (eg, $40), UC was the intervention with the highest probability of being cost-effective; as willingness to pay increased, however, other interventions became more likely to be cost-effective.
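The extended-dominance argument against the EMR arm can be checked numerically. The completion rates below are from the trial; the UC and EMR cost figures are hypothetical stand-ins (the text quotes only the AVM and Pharmacy totals from Table 3).

```python
# Completion rates per patient are from the trial; UC and EMR costs
# (per 100 patients) are hypothetical illustrations.
uc  = {"cost": 2100, "effect": 0.224}   # hypothetical cost
avm = {"cost": 4159, "effect": 0.663}   # cost from Table 3
emr = {"cost": 3800, "effect": 0.485}   # hypothetical cost

# Find the UC/AVM blend that matches EMR's effectiveness exactly.
w = (emr["effect"] - uc["effect"]) / (avm["effect"] - uc["effect"])
blend_cost = (1 - w) * uc["cost"] + w * avm["cost"]

print(round(w, 2))               # about 0.59 of patients given AVM
print(blend_cost < emr["cost"])  # blend is cheaper: EMR is dominated
```

Because a mix of two other strategies achieves the same effectiveness for less money, the EMR strategy can never be the optimal choice at any willingness-to-pay level, which is what “extended dominance” means.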
Table 4 shows an examination of the cost-effectiveness of finding abnormal cases. There was a similar pattern of increasing effectiveness at higher cost, and the EMR intervention method was again dominated. As with completed cases, at a lower level of willingness to pay for finding an abnormal case (eg, $400), UC had the highest probability of being cost-effective, but above $600 AVM and Pharmacy became more likely to be cost-effective. The sensitivity analysis based on low and high estimates of time used in ordering, reviewing, and follow-up of normal and abnormal tests showed no difference in ICER ranking and nearly identical results overall (eg, ICERs of $44 and $50 for the AVM arm vs $47 for the base-case analysis). The sensitivity analysis on the cost of contact for patients in the EMR arm revealed that, even if the cost of patient contact was reduced to zero (eg, the marginal cost might be close to zero if all contact took place during visits), the EMR arm would never be the optimal strategy.
Figure 2 shows the cost-effectiveness acceptability curves. For example, at a willingness-to-pay level for a completed case between $47 and $64, AVM had the highest probability of being cost-effective, but the Pharmacy intervention had the highest probability of being cost-effective at greater levels. There is considerable uncertainty in the estimates of cost-effectiveness, particularly for abnormal case detection. For example, at levels of $800 and higher AVM had a probability of being cost-effective of about 0.32, while Pharmacy was cost-effective with a probability of about 0.50.
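A cost-effectiveness acceptability curve of the kind shown in Figure 2 is typically built from replicated cost-effect pairs (eg, bootstrap resamples): at each willingness-to-pay level, it records how often each strategy has the highest net monetary benefit. The sketch below uses simulated toy replicates, not the trial data.

```python
import random

def ceac_point(samples, wtp):
    """Probability each strategy maximizes net monetary benefit
    (NMB = wtp * effect - cost) across replicates.
    `samples` maps strategy name -> list of (cost, effect) pairs."""
    names = list(samples)
    n = len(next(iter(samples.values())))
    wins = {name: 0 for name in names}
    for i in range(n):
        nmb = {name: wtp * samples[name][i][1] - samples[name][i][0]
               for name in names}
        wins[max(nmb, key=nmb.get)] += 1
    return {name: wins[name] / n for name in names}

# Toy per-patient replicates (hypothetical): UC is cheap but completes
# few cases; AVM costs more but completes many more.
rng = random.Random(1)
reps = {
    "UC":  [(21 + rng.gauss(0, 2), 0.22 + rng.gauss(0, 0.02))
            for _ in range(500)],
    "AVM": [(42 + rng.gauss(0, 2), 0.66 + rng.gauss(0, 0.02))
            for _ in range(500)],
}
probs = ceac_point(reps, wtp=100)  # at $100 per completed case
```

Evaluating `ceac_point` over a grid of willingness-to-pay values traces out one acceptability curve per strategy.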
We found that direct-to-clinician messaging (EMR) to prompt therapeutic baseline monitoring is probably not an efficient use of resources. But our analysis indicates that depending on a decision maker’s willingness to pay for a completed case, both AVM and Pharmacy can be efficient choices. Deciding which choice is optimal requires additional information about the harms prevented from completing baseline monitoring; our analysis did not include these costs because of limitations in study budget and design. However, it is worth noting that compared with the ICER for the AVM arm, the ICER for the Pharmacy arm was almost 40% greater for the “cases completed” analysis, but was less than 10% greater for the “abnormal cases detected” analysis, suggesting that the relative efficiency of the Pharmacy intervention may increase when considering harms.
As noted, our findings are most useful for an efficiency comparison between the alternative methods in the analysis. The findings give less guidance to help with the decision about what (if any) program should be adopted; a cost-benefit or cost-utility analysis would be needed to answer that question. But an informal analysis can be made using the data in Table 3 and making assumptions about the cost and probability of events that might be avoided by a monitoring program. For example, if one assumes that the cost of an event avoided would be $10,000 (ie, short inpatient stay) and the probability of the event for those with an abnormal test is 5%, the willingness-to-pay amount to avoid the event would be at least $500 (ie, $10,000 × 5%); this example does not incorporate potential losses in health-related quality of life. Empirical work on the probability of abnormal tests leading to adverse events, the cost of those events, and the health benefits gained is necessary to further inform decision makers’ choices in this area. Reports of simulation modeling on the economics of preventing medication errors using information technology and pharmacy personnel indicate that health-related quality-of-life benefits (from avoiding medication errors) are a critical parameter; their inclusion reversed the estimates from negative to positive net benefits.6
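The back-of-the-envelope willingness-to-pay threshold above is just an expected-value product, as the assumptions make explicit:

```python
# Assumptions from the worked example in the text (not trial data):
cost_of_event_avoided = 10_000   # assumed cost of a short inpatient stay
p_event_given_abnormal = 0.05    # assumed probability of the event

# Expected savings per abnormal case found: a floor on willingness to pay.
wtp_per_abnormal_case = cost_of_event_avoided * p_event_given_abnormal
print(wtp_per_abnormal_case)  # 500.0
```

Any quality-of-life losses avoided would push the justified willingness-to-pay figure above this floor.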
Our intervention did not differentiate patients with regard to potential for poor outcomes; rather it was applied to all patients with a missing laboratory test at baseline. Another approach would be to administer the intervention to patients who are predicted to be at particularly high risk of poor outcomes. By targeting high-risk patients, the proportion of patients with a greater likelihood of adverse events (given an abnormal test) would increase. Such an approach might increase the efficiency of the strategies but would require identification of those patients, perhaps by using a risk score approach.7
The clinical trial on which we based our economic analysis showed that each of the 3 interventions was superior to UC in improving monitoring at 25 days; patients in the EMR arm were 2.5 times more likely to have recommended monitoring.2 But when we included costs (ie, increase in clinician time associated with contacting the patient and follow-up), the EMR arm was shown to be relatively inefficient. Off-loading these tasks to others enhances the overall effects and lowers costs. We included the effort required by clinicians to review laboratory test results in our cost estimates. Another approach would have been to assume that clinicians can absorb the cost of reviewing the extra tests into their slack time. But as suggested by best practice guidelines in economic evaluation,4 we estimated the “opportunity cost” of providing the interventions. The concept of opportunity cost requires measurement of all resources necessary to produce an intervention, even if those resources may be viewed as free (eg, the effort required to review laboratory tests); this concept recognizes that doing one thing means that another thing cannot be done (because that opportunity is lost).
We found relatively high levels of certainty associated with willingness to pay for an additional completed case. At a willingness-to-pay level of less than $40 the probability of UC being the most cost-effective was about 0.9, and at a willingness-to-pay level greater than about $85, the Pharmacy intervention was the most cost-effective with a probability of 0.9. With intermediate levels of willingness to pay for a completed case (ie, $50-$65) AVM was the most cost-effective, but with less certainty (probability about 0.6). On the other hand, we found considerable uncertainty with the abnormal case detection analysis. No single intervention showed cost-effectiveness with probability greater than 0.5 at a willingness to pay between $450 and $850. The uncertainty estimates are loosely analogous to a wide confidence interval (see van Hout et al5 for further details) and point out the lack of precision in our estimates. More precise estimates require further data collection.8 Our results were not sensitive to the estimates of time required to order, review, and follow-up on tests, suggesting that additional data collection should focus on other areas like estimating the potential cost-offsets from avoiding poor outcomes.
Some items we included in the performance of these interventions, for example, the individual chart review of each patient prior to intervention to ensure eligibility (ie, we did not rely entirely on electronic data), may not have been strictly necessary. Excluding these potentially protocol-driven costs may change the relative cost-effectiveness of the interventions. We felt that this was an important feature to include, however, because the effectiveness of the intervention was based on that level of rigor, and because the HMO would likely not be willing to implement any of the programs without that double-check. Additionally, we included patient contact costs in the EMR arm for all patients, but this may have overcounted costs in that arm to the extent that the rate of contact was lower. But our sensitivity analysis on that variable indicated that even if the contact costs were reduced to zero the EMR arm would never have been the optimal strategy.
The clinical trial allowed us to evaluate strategies that would off-load clinicians’ work (ie, AVM and Pharmacy arms) as compared with more intrusive strategies (the EMR arm). Because we did not set out to balance resources available in the intervention arms, we cannot determine whether the effect and cost difference we observed was solely based on the intervention delivery mechanism, the intensity of the intervention, or other factors. Also, although a laboratory-monitoring protocol was available and communicated to all clinical staff, clinicians in the EMR arm may have been more likely to exercise clinical judgment on a case-by-case basis and thus not intervene for every patient. Our analysis assumes that all patients who received interventions were at the same risk of poor outcomes. Because we did not measure outcomes in our analysis, to the extent that the clinical judgment exercised by EMR clinicians improved the likelihood of interventions for high-risk patients (thus improving their group’s potential to benefit), our analysis underestimated the true effect in the EMR arm.
Our study has limitations. The clinical trial on which it was based was undertaken in a single HMO, so the effect of the interventions (especially EMR) may not be completely generalizable to other practice settings. Additionally, the parent study was not designed to determine whether the interventions led to improved detection of abnormal baseline laboratory test results or significant changes in patient care and outcomes. Our findings in this cost-effectiveness analysis help to put the clinical trial results in a decision-making context. We think a hybrid intervention, for example one that includes AVM followed by Pharmacy outreach for nonresponders, may prove to be particularly efficient.
The efficient use of health information technology is of great interest to clinicians and healthcare payers, and our study is one of the first to estimate the relative cost-effectiveness of these technologies. Automated telephone technology has shown promise in many clinical areas, for example, after inpatient discharge,9 in enhancing immunization rates in children,10 and in asthma care,11 but the cost-effectiveness of these systems is less clear. The use of automated medical records has been the subject of several systematic reviews.12-14 Like automated telephone technology, the EMR systems have promise, but the economic benefits are less certain. Our study shows that providing clinical decision support directly to clinicians using EMR technology is more costly and less effective than automated telephone calls for increasing laboratory monitoring. This finding is particularly of interest given (1) that we did not include costs for programming of EMR messages because existing EMR functionality was used and (2) the “sunk cost” of the EMR was not included in our analysis because from the HMO’s perspective the EMR technology was already owned. But it should be noted that the costs of chart review and tracking activities would likely have been substantially higher in a health system that lacked an EMR. Our study may serve as a cautionary example of when EMR-based intervention methods are not the most efficient choice.
Depending on one’s willingness to pay, both AVM and a pharmacy-led intervention may be efficient methods to enhance the level of laboratory monitoring of medications at initiation. Our analysis gives guidance, but further empirical work to estimate the cost and probability of adverse events associated with abnormal laboratory tests is important to further inform funding decisions.
Author Affiliations: From the Center for Health Research (DHS, ACF, NAP, XY, MMR), Kaiser Permanente and Northwest Permanente (ACF), Portland, OR; the Institute for Health Research (MAR, DJM), Kaiser Permanente Colorado, Denver; Harvard Medical School (SRS, SBS) and Harvard Pilgrim Health Care (SRS, SBS), Boston, MA. Funding Source: The study was funded by Kaiser Permanente’s Garfield Memorial Fund.
Author Disclosure: The authors (DHS, ACF, NAP, XY, MMR, MAR, DJM, SRS, SBS) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (DHS, ACF, NAP, MMR, MAR, DJM, SRS, SBS); acquisition of data (DHS, ACF, XY, MMR); analysis and interpretation of data (DHS, ACF, NAP, XY, MMR, MAR, DJM, SRS, SBS); drafting of the manuscript (DHS, MMR); critical revision of the manuscript for important intellectual content (DHS, ACF, NAP, MAR, DJM, SRS, SBS); statistical analysis (DHS, NAP, XY, SBS); provision of study materials or patients (DHS); obtaining funding (DHS, ACF); administrative, technical, or logistic support (DHS, ACF, MMR); and supervision (DHS, ACF, SBS).
Address correspondence to: David H. Smith, RPh, PhD, Center for Health Research, Kaiser Permanente, 3800 N Interstate Ave, Portland, OR 97227. E-mail: email@example.com.
1. Raebel MA, Lyons EE, Chester EA, et al. Improving laboratory monitoring at initiation of drug therapy in ambulatory care: a randomized trial. Arch Intern Med. 2005;165(20):2395-2401.
2. Feldstein AC, Smith DH, Perrin N, et al. Improved therapeutic monitoring with several interventions: a randomized trial. Arch Intern Med. 2006;166(17):1848-1854.
3. National Committee for Quality Assurance. HEDIS® 2007 Summary Table of Measures and Product Lines. http://web.ncqa.org/Portals/0/HEDISQM/Archives/2007/MeasuresList.pdf. Accessed February 19, 2009.
4. Gold M, Siegel J, Russel L, Weinstein M. Cost-effectiveness in Health and Medicine. New York, NY: Oxford University Press; 1996.
5. van Hout BA, Al MJ, Gordon GS, Rutten FF. Costs, effects and C/E-ratios alongside a clinical trial. Health Econ. 1994;3(5):309-319.
6. Karnon J, McIntosh A, Dean J, et al. Modelling the expected net benefits of interventions to reduce the burden of medication errors. J Health Serv Res Policy. 2008;13(2):85-91.
7. Lumley T, Kronmal RA, Cushman M, Manolio TA, Goldstein S. A stroke prediction score in the elderly: validation and Web-based application. J Clin Epidemiol. 2002;55(2):129-136.
8. Claxton K. The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. J Health Econ. 1999;18(3):341-364.
9. Forster AJ, van Walraven C. Using an interactive voice response system to improve patient safety following hospital discharge. J Eval Clin Pract. 2007;13(3):346-351.
10. Lieu TA, Capra AM, Makol J, Black SB, Shinefield HR. Effectiveness and cost-effectiveness of letters, automated telephone messages, or both for underimmunized children in a health maintenance organization. Pediatrics. 1998;101(4):E3.
11. Vollmer WM, Kirshner M, Peters D, et al. Use and impact of an automated telephone outreach system for asthma in a managed care setting. Am J Manag Care. 2006;12(12):725-733.
12. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evid Rep Technol Assess (Full Rep). 2006;(132):1-71.
13. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-752.
14. Uslu AM, Stausberg J. Value of the electronic patient record: an analysis of the literature. J Biomed Inform. 2008;41(4):675-682. Epub 2008 Feb 15.