Impact of Weekly Feedback on Test Ordering Patterns

Christine Minerowicz, MD; Nicole Abel, MD; Krystal Hunter, MBA; Kathryn C. Behling, MD, PhD; Elizabeth Cerceo, MD; and Charlene Bierl, MD, PhD

Laboratory costs represent approximately 4% of the overall $2.5 trillion healthcare expenditure per year in the United States,1 and test results influence upwards of 70% of subsequent clinical decision making and patient management.2 The number and complexity of available tests have risen exponentially along with costs,3 which has forced laboratories and clinicians to take a closer look at expenditures.4 These concerns are reflected in national efforts to contain expenditures, exemplified by the Choosing Wisely campaign led by the American Board of Internal Medicine Foundation, which promotes more effective use of healthcare by identifying select commonly used tests and procedures that should receive increased scrutiny before being performed.
 
Estimates of unwarranted testing are highly variable, ranging from 10% to 50%, with a recent study estimating this type of unnecessary testing at 20.6%.5,6 In an attempt to help curb rising costs, laboratories have employed a variety of methods to reduce unwarranted testing. Notable approaches include reflex algorithms,7 guidelines,8-12 computer alerts,13,14 clinical justification letters for expensive tests,15 interviews and questionnaires,16 physician education,11,17 weekly announcements regarding costs of laboratory services,18 computer display of cost at the time of ordering,19-21 physician feedback,22,23 and a variety of multifaceted approaches.11,12,24 Although many interventions have been shown to significantly reduce laboratory testing and charges during their intervention periods,8,17,19,24-26 others failed to yield a significant impact.11,21
 
Aside from its adverse economic impact, unwarranted laboratory testing also has the potential to negatively impact patient outcomes. Studies show that polyphlebotomy leads to reduced patient satisfaction27 and can cause hospital-acquired anemia, which may lead to additional costly procedures such as blood transfusions.28,29 Unwarranted testing also increases the likelihood of spurious results that can lead to additional testing and potential medical errors.30
 
We implemented an ongoing, 2-part performance improvement initiative, piloted with the internal medicine residents at Cooper University Hospital in Camden, New Jersey, beginning in March 2012. An introductory educational session was followed by weekly feedback on ordering patterns, including each resident's ordering relative to that of peers in the same postgraduate year, as well as general trends in the ordering of all internal medicine residents in the study. The goal of the performance improvement initiative was to reduce unnecessary testing while allowing the ordering physician to decide which tests could be spared. This study is a retrospective evaluation of the impact of education and weekly feedback on the laboratory test-ordering patterns of resident physicians.
 
METHODS
Setting and Intervention
The study was conducted in an academic, 493-bed tertiary care hospital whose clinical laboratory performs an average of 2 million tests annually. Starting in March 2012, the laboratory began a performance improvement initiative to reduce unnecessary testing. Raw data, including every inpatient laboratory test performed, were extracted from Sunquest Laboratory Manager and merged with a database linking physicians to their respective departments and postgraduate year (PGY). The database was filtered to include only tests performed in the core laboratories (ie, microbiology, chemistry, coagulation, hematology, and reference laboratory testing). Internal medicine residents were then identified within this database, and each was assigned an anonymous 4-digit identifier (de-identifier) known only to the laboratory director and that resident. Data were pulled each week.
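A minimal sketch of such a weekly data pull is shown below, assuming the Sunquest extract and the physician database are exported as flat files; the file names and column names are hypothetical, as the paper does not describe the actual export format.

```python
# Sketch of the weekly data pull described above (hypothetical file and column
# names; the actual Sunquest Laboratory Manager export format is not specified).
import pandas as pd

CORE_SECTIONS = {"microbiology", "chemistry", "coagulation", "hematology", "reference"}

def build_weekly_report(orders_csv: str, physicians_csv: str) -> pd.DataFrame:
    """Merge raw inpatient test orders with physician metadata, keep core-laboratory
    tests ordered by internal medicine residents, and report only the anonymous
    4-digit de-identifier for each ordering resident."""
    orders = pd.read_csv(orders_csv)            # one row per test performed
    physicians = pd.read_csv(physicians_csv)    # physician, department, PGY, de-identifier

    merged = orders.merge(physicians, on="ordering_physician", how="inner")
    core = merged[merged["lab_section"].str.lower().isin(CORE_SECTIONS)]
    im_residents = core[core["department"] == "Internal Medicine"]

    # Drop the physician name so only the de-identifier appears in the report.
    return im_residents.drop(columns=["ordering_physician"])
```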
 
In March 2012, the residents received the “intervention,” an hour-long presentation on effective laboratory utilization that emphasized the potential costs and harms of overutilization. Brief 5-minute refresher sessions were held in August 2012 and August 2013, and regular reminders and discussions of their overall progress were included in monthly resident meetings. Each week, the residents received 2 graphs from the reporting week via group e-mail. The first graph showed the total number of tests ordered per de-identifier, grouped according to PGY level, so that residents could see how they compared with peers at the same level of training and responsibility. Specific rotations and laboratory sections were not disclosed to protect anonymity (Figure 1). Some variability was expected with rotation assignments, so the residents were encouraged to look at their performance across multiple weeks; although a high number of orders might be expected during a busy critical care week, no single resident would be expected to remain at an extreme across multiple rotations. The second graph depicted the tests ordered by the internal medicine residents as a whole since July 2011 (Figure 2). The data did not include tests ordered as point-of-care or through the blood bank and anatomic pathology sections of the laboratory.
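As an illustration only, the first weekly graph could be generated from a report like the one sketched above; the column names ("pgy", "deidentifier") are hypothetical and do not describe the authors' actual tooling.

```python
# Sketch of the first weekly feedback graph: tests ordered per de-identifier,
# grouped by PGY level (hypothetical column names carried over from the sketch above).
import matplotlib.pyplot as plt
import pandas as pd

def plot_weekly_feedback(report: pd.DataFrame, week_label: str) -> None:
    # Count the tests attributed to each de-identifier, grouped by PGY level.
    counts = report.groupby(["pgy", "deidentifier"]).size().sort_index()
    ax = counts.plot(kind="bar", figsize=(10, 4))
    ax.set_xlabel("PGY level / de-identifier")
    ax.set_ylabel("Tests ordered")
    ax.set_title(f"Tests ordered per resident, week of {week_label}")
    plt.tight_layout()
    plt.savefig(f"weekly_feedback_{week_label}.png")
```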
 
To further evaluate the success of this effort, we compared two 26-week time periods: September 2, 2011, through February 24, 2012 (pre-intervention), and August 31, 2012, through February 22, 2013 (post-intervention). There were 56 ordering residents in the pre-intervention period and 58 in the post-intervention period.
 
Outcomes, Measurements, and Statistical Analysis
Outcomes measured included the total number of tests performed, the laboratory sections from which tests were ordered, and the total charges billed. The data were then reassessed, controlling for patient-days. For all outcomes, we examined changes between the pre- and post-intervention periods as a whole. Statistical significance was calculated using an independent t test.
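For illustration, a minimal sketch of such an independent t test is shown below. The arrays are synthetic placeholders rather than the study data, and the unit of analysis (weekly test totals) is assumed here; the paper does not specify it.

```python
# Illustrative sketch of the pre- versus post-intervention comparison using an
# independent t test on weekly test totals. The numbers are synthetic placeholders,
# not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_weekly = rng.normal(loc=116_657 / 26, scale=300, size=26)    # ~4487 tests/week
post_weekly = rng.normal(loc=92_183 / 26, scale=300, size=26)    # ~3546 tests/week

t_stat, p_value = stats.ttest_ind(pre_weekly, post_weekly)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```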
 
RESULTS
During the pre-intervention period, 116,657 tests were ordered; 92,183 tests were ordered during the post-intervention period. This represents a net decrease of 24,474 tests (21% of the pre-intervention total), or an average decrease of 941 tests per week. When the test-ordering patterns were examined by laboratory section and specific tests, we found that the greatest decrease in test ordering occurred in chemistry, followed by hematology, coagulation, and “all other tests” (ie, microbiology, immunology, send-outs, flow cytometry, and urinalysis) (Table). In chemistry, there was a net decrease of 14,087 tests (58% of the overall decrease), followed by reductions of 5543 (23%), 3741 (15%), and 1110 (5%) tests in hematology, coagulation, and all other tests, respectively. Large decreases were seen in test orders for magnesium, phosphorus, basic metabolic panels, complete blood count (CBC) with automated differential, CBC with manual differential, and prothrombin time/international normalized ratio, followed by a collective decrease in the remaining 466 test types (Table). There were no significant increases in tests ordered, with the exception of the CBC without differential, which saw a 3% increase (705 tests) in the post-intervention period. The net decrease in tests corresponded to a $1.3 million reduction in overall charges, with the greatest reduction in the chemistry section ($980,631; 76% of the reduction).
 
Because patient census has the potential to affect the aggregate number of tests ordered, we examined the number of overall hospital days in the pre- versus post-intervention periods. There were more hospital days in the post-intervention period (68,180 days) than in the pre-intervention period (62,220 days), suggesting that the decrease in test ordering was not due to a decrease in overall patient census. The number of hospital days specific to the medicine services showed changes proportional to those seen in the hospital overall. The overall laboratory testing volumes for the rest of the hospital increased during the post-intervention period (data not shown).
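As a back-of-the-envelope check using only the aggregate figures reported above (not a reported study endpoint), normalizing by hospital days points in the same direction:

```python
# Illustrative census normalization using the aggregate figures reported above;
# this is a rough check, not a study result.
pre_tests, pre_days = 116_657, 62_220
post_tests, post_days = 92_183, 68_180

print(f"Pre-intervention:  {pre_tests / pre_days:.2f} tests per hospital day")
print(f"Post-intervention: {post_tests / post_days:.2f} tests per hospital day")
```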
 
DISCUSSION
Our findings demonstrate that a reduction of more than 24,000 tests, with an associated reduction of $1.3 million in charges over a 26-week period, was achievable following a single intervention point and weekly feedback to only a fraction of the ordering physicians at our institution. By not targeting any specific test or behavior, we were able to significantly reduce high-volume inexpensive tests (eg, magnesium, phosphorus, basic metabolic panels) and trend toward a reduction in low-volume expensive reference testing (included in “all other tests”) without altering our existing information systems or disclosing fees. The time investment on the part of the laboratory director to provide the weekly feedback report was remarkably small in comparison with the reductions observed.
 
Achieving meaningful and sustainable reductions in laboratory testing involves both the laboratory professionals performing the testing and the clinicians who order the tests. The opposing forces of the financial impact of testing and the responsibility for excellent patient care must be reconciled to support an effective and efficient healthcare system. Many healthcare systems have attempted to achieve this goal using a variety of strategies, with varying degrees of success. Some have found that the test-ordering interface can have a significant influence on ordering patterns.10,31 Of the different interventions described in the literature,4,12,14,31 many involve altering the test menu to make test ordering more difficult for the clinician; although these changes may be well-intentioned, they can be cumbersome and frustrating. Notably, when these interventions involve removing esoteric tests from the test menu or eliminating test-ordering panels, there is no discussion of whether the clinician finds it helpful or frustrating to order each electrolyte on a basic metabolic panel individually or to call the laboratory each time they want to order a test that has disappeared from the menu. A minority of the interventions in the literature engages the clinician as a meaningful counterpart in tackling a rather complex problem.24 A major strength of our approach was that it fostered a collegial, nonantagonistic relationship between the laboratory director and the internal medicine residents. Test orders were substantially reduced while maintaining each ordering resident’s anonymity and autonomy to diagnose and manage their patients as necessary.
 
Although we were able to calculate the reduction in charges achieved by our intervention, we cannot directly translate the fewer tests performed into dollars saved for the laboratory or reduced healthcare costs for capitated inpatients. For example, more than 4000 fewer magnesium levels were ordered in the post-intervention period. Although we did not track order sets, it is likely that most of the magnesium levels ordered were in conjunction with other tests. The economic impact of performing or not performing a magnesium level is negligible when a tube of blood is already drawn for another test, such as an electrolyte panel, because laboratory overhead costs are fixed (ie, no additional material, phlebotomist time, or laboratory technician time is required) and the reagents to perform the test are inexpensive. In contrast, the reduction of nearly 3000 manual differentials, which require laboratory technician time and occasional pathologist review, has the potential for far greater impact on the daily operations of the laboratory in terms of personnel allocation.
 
Caution must be exercised before extrapolating our data hospitalwide. The same level of impact may not be seen in medical services that do not routinely manage critically ill patients or require extensive diagnostic evaluation with the same frequency as that performed by our internal medicine residents. Additionally, attending physicians may be too entrenched in their ordering habits or may have already attenuated their test volume to a fairly efficient level due to practice experience. Therefore, they may not achieve the same level of success using our intervention.
 
Finally, the downstream effects of our study remain to be seen, as we continue to provide the weekly feedback reports to the internal medicine residents. Reduced testing may lead some to fear delayed or missed diagnoses. However, Wang et al demonstrated that reduced testing (range = 7% to 40% of all chemistry tests) in the coronary care unit did not alter length of stay, readmission to the intensive care unit, hospital morbidity, hospital mortality, or ventilator days.9 We do not know if reducing unnecessary testing will become an integral part of the residents’ ordering behaviors as they move beyond their training or if the mindfulness will fade once the weekly feedback reports disappear.
 
CONCLUSIONS
We conclude that providing internal medicine residents with weekly feedback regarding test-ordering patterns resulted in a significant reduction in the total number of laboratory tests ordered over time. These data suggest that there may be utility in expanding the use of these weekly feedback reports to other specialties and to nonresident physicians, as well as in expanding the reports to include other areas of patient care such as transfusion medicine. 