
Electronic Health Record Feedback to Improve Antibiotic Prescribing for Acute Respiratory Infections


An electronic health record–based feedback program, the Acute Respiratory Infection Quality Dashboard, did not lead to an overall change in antibiotic prescribing in primary care.

Objective:

To examine whether the Acute Respiratory Infection (ARI) Quality Dashboard, an electronic health record (EHR)–based feedback system, changed antibiotic prescribing.

Study Design:

Cluster randomized, controlled trial.

Methods:

We randomly assigned 27 primary care practices to receive the ARI Quality Dashboard or usual care. The primary outcome was the intent-to-intervene antibiotic prescribing rate for ARI visits. We also compared antibiotic prescribing between ARI Quality Dashboard users and nonusers.

Results:

During the 9-month intervention, there was no difference between intervention and control practices in antibiotic prescribing for all ARI visits (47% vs 47%; P = .87), antibiotic-appropriate ARI visits (65% vs 64%; P = .68), or non–antibiotic-appropriate ARI visits (38% vs 40%; P = .70). Among the 258 intervention clinicians, 72 (28%) used the ARI Quality Dashboard at least once. These clinicians had a lower overall ARI antibiotic prescribing rate (42% vs 50% for nonusers; P = .02). This difference was due to less antibiotic prescribing for non–antibiotic-appropriate ARIs (32% vs 43%; P = .004), including nonstreptococcal pharyngitis (31% vs 41%; P = .01) and nonspecific upper respiratory infections (19% vs 34%; P = .01).

Conclusions:

The ARI Quality Dashboard was not associated with an overall change in antibiotic prescribing for ARIs, although when used, it was associated with improved antibiotic prescribing. EHR-based quality reporting, as part of “meaningful use,” may not improve care in the absence of other changes to primary care practice.

(Am J Manag Care. 2010;16(12 Spec No.):e311-e319)

Quality reporting is one of the criteria for the "meaningful use" of electronic health records. However, introduction of a quality report about antibiotic prescribing for acute respiratory infections, the Acute Respiratory Infection Quality Dashboard, was not associated with improved quality of care.

  • Quality reporting, by itself, is frequently insufficient to improve the quality of care.

  • To be effective, quality reporting likely needs to be coupled with other interventions like clinician detailing, clinical decision support, patient education, or financial incentives.

  • Meaningful use criteria should be evaluated for effectiveness as they are implemented.

Electronic health records (EHRs) have been touted as a way to improve the quality of healthcare in the United States.1,2 The Health Information Technology for Economic and Clinical Health (HITECH) Act, which authorizes unprecedented incentives for EHR adoption, requires eligible physicians to engage in “meaningful use” of EHRs. One of the meaningful use “menu” criteria is the ability to “generate lists of patients by specific conditions” for, among other things, quality improvement.3 Generating such lists may help clinicians understand patterns of care and improve the quality of care, but the effectiveness of this capability is largely untested.

Acute respiratory infections (ARIs) are the most common symptomatic reason for ambulatory visits and account for about half of antibiotic prescriptions in the United States.4,5 Despite guidelines generally discouraging antibiotic prescribing for ARIs, especially for non–antibiotic-appropriate ARIs, about half of antibiotic prescriptions for ARIs are inappropriate.6,7 Inappropriate antibiotic prescribing is clinically ineffective, increases medical costs, increases the prevalence of antibiotic-resistant bacteria, and unnecessarily exposes patients to adverse drug events.8 Most interventions to decrease inappropriate antibiotic prescribing for ARIs have been, at best, modestly effective.9

To examine whether providing EHR-based feedback improves the quality of care and reduces inappropriate antibiotic prescribing for ARIs, we developed the ARI Quality Dashboard, an EHR-integrated, clinician-level report that details antibiotic prescribing for ARIs. We evaluated the effectiveness of the ARI Quality Dashboard in a cluster randomized, controlled clinical trial in primary care practices.

METHODS

Partners HealthCare System is an integrated regional healthcare delivery network in eastern Massachusetts. The main EHR used in Partners HealthCare ambulatory clinics is the Longitudinal Medical Record (LMR). The LMR is an internally developed, full-featured, Certification Commission for Healthcare Information Technology–approved EHR (2006) including primary care and subspecialty notes, problem lists, medication lists, coded allergies, and laboratory test and radiographic study results. The practices in this study began using the LMR between 1999 and 2003.

ARI Quality Dashboard

The ARI Quality Dashboard contains views of clinicians’ antibiotic prescribing and billing practices for ARI visits (Figure 1). Each view displays a clinician’s performance against his or her clinic peers and against national benchmarks. The ARI Quality Dashboard includes the proportion of ARI visits at which antibiotics were prescribed; the proportion of individual ARI diagnoses (eg, pneumonia, sinusitis, acute bronchitis) at which antibiotics were prescribed; the proportion of broader-spectrum antibiotic prescribing; the distribution of ARI visits by evaluation and management billing codes (eg, level 1 through 5); and individual patient visit details, including date of service, antibiotic prescribed, antibiotic class, date of prescription, diagnosis codes, and evaluation and management billing codes. We designed the ARI Quality Dashboard based on the recommendations of the Centers for Disease Control and Prevention and the American College of Physicians.10
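To make these dashboard views concrete, the following is a minimal Python (pandas) sketch of how per-clinician rates and peer comparisons of this kind could be computed from visit-level data. The file name, column names, and benchmark value are hypothetical illustrations, not the study's actual implementation.

```python
# A minimal sketch, assuming a hypothetical visit-level extract with columns:
#   clinician_id, practice_id, ari_diagnosis, antibiotic_prescribed (0/1).
import pandas as pd

visits = pd.read_csv("ari_visits.csv")   # hypothetical extract
NATIONAL_BENCHMARK = 0.43                # hypothetical national benchmark rate

# Proportion of ARI visits at which antibiotics were prescribed, per clinician,
# alongside the practice-level (peer) rate and the gap versus the benchmark.
practice_rate = visits.groupby("practice_id")["antibiotic_prescribed"].mean()
report = (visits.groupby(["practice_id", "clinician_id"])["antibiotic_prescribed"]
                .mean().rename("clinician_rate").reset_index())
report["practice_rate"] = report["practice_id"].map(practice_rate)
report["vs_benchmark"] = report["clinician_rate"] - NATIONAL_BENCHMARK

# Prescribing rate by individual ARI diagnosis (eg, sinusitis, acute bronchitis).
by_diagnosis = (visits.groupby(["clinician_id", "ari_diagnosis"])
                      ["antibiotic_prescribed"].mean())

print(report.head())
print(by_diagnosis.head())
```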

We included billing data to give clinicians a sense of financial incentive to review the dashboard. Because the dashboard displayed evaluation and management billing codes, clinicians could learn whether they were under-billing for ARI visits, which generally make up about 10% of all visits, compared with their peers. However, clinicians had no direct financial incentive to view the ARI Quality Dashboard. Clinicians’ salaries were overwhelmingly productivity based. Pay-for-performance incentives were in place, accounting for about 5% of clinicians’ salaries, but none were related to antibiotic prescribing.

Clinicians accessed the ARI Quality Dashboard from the EHR Reports Central area, which contained about 10 other reports about preventive and chronic disease management. A clinician could “drill down” to any patient’s medical record directly from the ARI Quality Dashboard to review patient details and export the report for additional follow-up or analysis. We used ASP.NET technology to build the ARI Quality Dashboard. Reports were constructed and viewed using Crystal Reports XI, with data from the Partners HealthCare Quality Data Warehouse, which aggregates data from various sources. The ARI Quality Dashboard displayed visit and prescribing data for the previous year and was automatically updated monthly.

We previously piloted the ARI Quality Dashboard; pilot users accessed it and found it useful for understanding their antimicrobial prescribing patterns.11-13 They also found it convenient to validate the ARI Quality Dashboard reports against primary data in the EHR by drilling down to individual patient charts.

Practice Matching, Randomization, and Intervention Implementation

We randomly assigned 27 primary care clinics associated with Partners HealthCare that use the LMR to receive the ARI Quality Dashboard or to usual care. We matched clinics on the basis of size. Matched pairs were randomized, with 1 practice from each pair assigned to receive the intervention and the other assigned to usual care. The Human Research Committee of Partners HealthCare approved the study protocol.

The intervention period was from November 27, 2006, to August 31, 2007. Throughout the intervention period, we sent monthly e-mails reminding clinicians about the ARI Quality Dashboard. Beyond these e-mails, there was no coordinated effort to educate the EHR support team and no formal release of the ARI Quality Dashboard functionality to EHR users. The research team provided application and user support for the ARI Quality Dashboard.

Outcomes

The primary outcome was the antibiotic prescribing rate for ARIs, based on electronic prescribing using the EHR, in an intent-to-intervene analysis, adjusted for clustering by practice. We considered ARIs in aggregate to avoid the potential problem of “diagnosis shifting,” in which clinicians might prescribe antibiotics but select more antibiotic-appropriate diagnoses to mask inappropriate prescribing.14 Secondary outcomes included the antibiotic prescribing rate for antibiotic-appropriate diagnoses and non–antibiotic-appropriate diagnoses (see Data Collection and Analysis, below) and for individual ARI diagnoses.

We also performed an “as-used” analysis by comparing antibiotic prescribing between intervention clinicians who used the ARI Quality Dashboard at least once with intervention clinicians who did not use the ARI Quality Dashboard, adjusted for clustering by clinician. Because of the practice-level randomization, we excluded the control clinicians from the as-used analysis.

Data Collection and Analysis

We identified ARI visits using administrative data coded with International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes. We considered antibiotic-appropriate ARI visits to be those with an ICD-9-CM code for pneumonia (481-486), streptococcal pharyngitis (034.0), sinusitis (461 and 473), or otitis media (381 and 382). We considered non–antibiotic-appropriate ARI visits to be those with an ICD-9-CM code for nonstreptococcal pharyngitis (462 and 463), influenza (487), acute bronchitis (466 and 490), or nonspecific upper respiratory infection (460, 464, and 465). Compared with medical record review, these administrative data have a sensitivity of 98%, a specificity of 96%, and a positive predictive value of 96% for diagnosing ARIs.15 If a patient had multiple ARI diagnoses at a visit, we counted that visit only once, giving preference to more antibiotic-appropriate diagnoses.
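As an illustration of the visit classification rule just described, here is a minimal Python sketch; the dictionaries and function are hypothetical helpers, not the study's code.

```python
# Hypothetical helper illustrating the ICD-9-CM grouping above and the rule
# that, when a visit has multiple ARI diagnoses, the more antibiotic-appropriate
# diagnosis takes precedence.
ANTIBIOTIC_APPROPRIATE = {
    "pneumonia": ("481", "482", "483", "484", "485", "486"),
    "streptococcal pharyngitis": ("034.0",),
    "sinusitis": ("461", "473"),
    "otitis media": ("381", "382"),
}
NON_ANTIBIOTIC_APPROPRIATE = {
    "nonstreptococcal pharyngitis": ("462", "463"),
    "influenza": ("487",),
    "acute bronchitis": ("466", "490"),
    "nonspecific upper respiratory infection": ("460", "464", "465"),
}

def classify_ari_visit(icd9_codes):
    """Classify a visit from its ICD-9-CM codes, preferring antibiotic-appropriate
    diagnoses over non-antibiotic-appropriate ones."""
    for category, groups in (("antibiotic-appropriate", ANTIBIOTIC_APPROPRIATE),
                             ("non-antibiotic-appropriate", NON_ANTIBIOTIC_APPROPRIATE)):
        for diagnosis, prefixes in groups.items():
            if any(code.startswith(prefix) for code in icd9_codes for prefix in prefixes):
                return category, diagnosis
    return None, None

# Example: a visit coded for both acute bronchitis (466.0) and sinusitis (461.9)
# counts once, as antibiotic-appropriate sinusitis.
print(classify_ari_visit(["466.0", "461.9"]))
```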

We defined antibiotic use as the EHR prescription of an orally administered antibiotic agent within 3 days of an ARI visit. We previously found that the sensitivity of EHR antibiotic prescribing (ie, the proportion of all antibiotic prescriptions that were generated using the EHR) increased rapidly from 2000 to 2003.15 During the intervention period, it was the policy of study practices that clinicians write all prescriptions using the EHR.
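The outcome definition above (an oral antibiotic prescribed in the EHR within 3 days of an ARI visit) could be operationalized along these lines; the DataFrame layouts below are hypothetical and are not the study's data model.

```python
# A minimal sketch, assuming hypothetical DataFrames:
#   visits:        patient_id, visit_id, visit_date (datetime)
#   prescriptions: patient_id, rx_date (datetime), route ('oral', ...)
import pandas as pd

def flag_antibiotic_outcome(visits: pd.DataFrame,
                            prescriptions: pd.DataFrame) -> pd.DataFrame:
    """Mark each ARI visit as antibiotic-treated if an oral antibiotic was
    prescribed in the EHR within 3 days of the visit."""
    oral_rx = prescriptions[prescriptions["route"] == "oral"]
    merged = visits.merge(oral_rx, on="patient_id", how="left")
    within_window = (
        (merged["rx_date"] >= merged["visit_date"])
        & (merged["rx_date"] <= merged["visit_date"] + pd.Timedelta(days=3))
    )
    merged["antibiotic_prescribed"] = within_window.astype(int)
    # Collapse back to one row per visit: any qualifying prescription counts.
    return (merged.groupby(["patient_id", "visit_id", "visit_date"], as_index=False)
                  ["antibiotic_prescribed"].max())
```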

We considered clinicians who saw patients in both intervention and control practices (7% of 573 clinicians) to be intervention clinicians and assigned them to the intervention practices at which they had the most visits. These clinicians had an ARI Quality Dashboard use rate similar to that of clinicians overall. A secondary analysis excluding clinicians who saw patients in both the intervention and control practices did not change the results substantively. We removed data for 3 physicians who were involved in the design or implementation of the ARI Quality Dashboard.

We compared characteristics between the control and intervention practices, clinicians, and patients. In the intervention practices, we compared clinicians who used the ARI Quality Dashboard at least once with clinicians who never used the ARI Quality Dashboard.

Statistical Analysis and Power Calculation

We used standard descriptive statistics to compare clinicians and patients. To account for the level of randomization, we adjusted statistical analyses (the χ2 test for categorical variables and the t test for continuous variables) for clustering by practice using PROC GENMOD in SAS version 9.1 (SAS Institute, Inc, Cary, NC).16 For the comparison of antibiotic prescribing between intervention clinicians who did and did not use the ARI Quality Dashboard, we adjusted for clustering by clinician. Two-sided P values less than .05 were considered significant. Assuming a baseline antibiotic prescribing rate for ARIs of 35%, an α of .05, and an intraclass correlation coefficient of 0.10, 1798 visits in each group were required to have 80% power to detect a 7% absolute reduction in the antibiotic prescribing rate, a difference we thought would be clinically significant.15
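For readers who want to reproduce a cluster-adjusted comparison of this kind outside SAS, the sketch below uses the GEE implementation in Python's statsmodels with a logistic link and an exchangeable working correlation, clustered by practice. The file and column names are hypothetical, and this is an analogous approach, not the authors' PROC GENMOD code.

```python
# A minimal sketch, assuming a visit-level table with hypothetical columns:
#   antibiotic (0/1), intervention (0/1), practice_id
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

visits = pd.read_csv("ari_visits.csv")

# Logistic GEE clustered by practice with an exchangeable working correlation,
# analogous to the intent-to-intervene comparison described above. For the
# as-used comparison, one would instead cluster by clinician.
model = smf.gee(
    "antibiotic ~ intervention",
    groups="practice_id",
    data=visits,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())

# Odds ratio and 95% CI for the intervention effect.
or_point = np.exp(result.params["intervention"])
or_ci = np.exp(result.conf_int().loc["intervention"])
print(round(or_point, 2), or_ci.round(2).tolist())
```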

RESULTS

Practice, Clinician, and Patient Characteristics

Practices ranged in size from 4 to 36 clinicians (mean = 18 [SD = 10]). During the 9-month intervention period, 136,633 patients made 296,548 primary care visits, including 18,488 ARI visits, to 573 clinicians (Figure 2). There was no significant difference between intervention and control practices in number of years using the EHR, mean visits per year, the baseline antibiotic prescribing rate, or the baseline antibiotic prescribing rate for ARIs (data not shown). There were no significant differences in clinician or patient characteristics between intervention and control practices (Tables 1 and 2).

ARI Quality Dashboard Use

Of the 258 clinicians in the intervention group, 72 (28%) used the ARI Quality Dashboard at least once. Of these clinicians, 47 used the ARI Quality Dashboard only once, 9 used it twice, 6 used it 3 times, and 10 used it 4 or more times (maximum, 7 times). Compared with clinicians who did not use the ARI Quality Dashboard, clinicians who used the ARI Quality Dashboard were older, were more likely to be staff physicians, and had more visits and more ARI visits during the intervention period (Table 1). Between practices, ARI Quality Dashboard use ranged from 0% (0 of 9 clinicians) to 67% (4 of 6 clinicians). The mean within-practice clinician use was 30% (SD = 20%).

Antibiotic Prescribing

In the intent-to-intervene analysis, there was no significant difference in antibiotic prescribing for the primary outcome of all ARIs combined (odds ratio [OR] = 0.97; 95% confidence interval [CI] = 0.7, 1.4; P = .87), antibiotic-appropriate ARIs, non–antibiotic-appropriate ARIs, or any individual ARI (Table 3).

Within the intervention practices, clinicians who used the ARI Quality Dashboard were less likely to prescribe antibiotics for all ARIs (42% vs 50%; OR = 0.72; 95% CI = 0.54, 0.96; P = .02) and non–antibiotic-appropriate ARIs (32% vs 43%; OR = 0.63; 95% CI = 0.45, 0.86; P = .004). There was no significant difference in antibiotic prescribing for antibiotic-appropriate ARIs between ARI Quality Dashboard users and nonusers (63% vs 68%; OR = 0.78; 95% CI = 0.53, 1.15; P = .21). The reduced prescribing for non–antibiotic-appropriate diagnoses was due to reduced antibiotic prescribing to patients with nonstreptococcal pharyngitis (31% vs 41%; OR = 0.67; 95% CI = 0.49, 0.92; P = .01) and nonspecific upper respiratory infections (19% vs 34%; OR = 0.46; 95% CI = 0.26, 0.82; P = .01). Clinicians who used the ARI Quality Dashboard were marginally less likely to prescribe antibiotics for patients with sinusitis (69% vs 78%; OR = 0.63; 95% CI = 0.39, 1.0; P = .05).

DISCUSSION

In a cluster randomized controlled trial, we found that the introduction of an EHR-based quality report, the ARI Quality Dashboard, did not result in improved antibiotic prescribing. We also found that antibiotic prescribing rates, even for non–antibiotic-appropriate diagnoses, were generally high.

There may be several explanations for our findings. First, and most simply, the ARI Quality Dashboard may not be effective. Even broad use of the ARI Quality Dashboard may not be enough to overcome clinicians’ incorrect beliefs that antibiotics are necessary to treat ARIs, are desired by most patients with ARIs, or are necessary to maintain patient satisfaction.17-20

Second, making aggregate prescribing information available may be necessary to change clinical practice, but this information alone is insufficient.22 EHR-integrated feedback may be effective only when coupled with other interventions like formal clinician audit, clinician education, financial or regulatory incentives, patient education, formulary restrictions, or other multidimensional interventions.9,22,23 Although we included billing data, clinicians had no incentive to use the ARI Quality Dashboard. On the contrary, clinicians continued to have a financial incentive to prescribe antibiotics, because evaluation and management codes reward giving patients prescription medications.24 Like clinicians, patients respond to nonclinical, financial factors when it comes to antibiotic use for ARIs: patients with better medication insurance coverage use more antibiotics for ARIs, particularly more expensive, broader-spectrum antibiotics.25

Third, feedback to improve care may be a particular challenge for acute conditions, compared with chronic conditions, for which clinicians can make changes before, during, or after a clinic visit.26,27 Feedback about chronic conditions can be immediately actionable, but the ARI Quality Dashboard relies on clinicians’ memory of past feedback to affect future antibiotic prescribing. Acute conditions may require more interruptive clinical decision support at the time of decision making, although clinical decision support systems for ARIs in ambulatory care have had mixed results.28,29

Fourth, 9 months may have been too short a time for users to try the intervention and to change practice. Staff physicians who had busier practices were more likely to have used the ARI Quality Dashboard, perhaps because they have more opportunities to see and use novel functionality. Finally, the feedback system may be effective, but use and effectiveness may require more coordinated, intensive introduction, implementation, training, and support. Our introduction and implementation of the ARI Quality Dashboard were relatively weak, with only monthly emails, no hands-on training of users, and no organized use of EHR support personnel. Future qualitative investigations will be needed to understand precisely why clinicians did not use the ARI Quality Dashboard and why it was not effective.

The as-used analysis supports the conclusion that the ARI Quality Dashboard was effective but that the implementation was insufficient to change practice broadly. To be clear, the as-used analysis is also consistent with the conclusion either that ARI Quality Dashboard users differed from nonusers at baseline or that the introduction of the ARI Quality Dashboard preferentially “selected” clinicians who were already amenable to change.30 Unfortunately, we do not have data about the antibiotic prescribing practices of ARI Quality Dashboard users and nonusers before the intervention period that would allow us to differentiate between these conclusions.

Beyond the possible reasons for the lack of effectiveness of the ARI Quality Dashboard, our analysis has limitations. To identify ARI visits, we relied on billing codes, which remain in many EHRs the only practical way of identifying visit-based diagnoses. In the as-used analysis, we saw no evidence of diagnosis shifting that would be indicated by an increase in the proportion of antibiotic-appropriate visits and no change in antibiotic prescribing for all ARIs combined. To measure the outcomes, we relied on EHR antibiotic prescribing, which would miss antibiotic prescribing that occurred in the absence of a visit or that clinicians phoned into a pharmacy and did not enter into the EHR. Our previous studies found that billing diagnoses and EHR antibiotic prescribing had very good sensitivity and specificity,15 which have only improved in the context of practice policies supporting EHR prescribing. Our examination of aggregate oral antibiotic prescribing for ARIs also could have masked important differences in specific antibiotic choice, like a decrease in broad-spectrum antibiotic prescribing.31,32 Finally, we conducted our study using an advanced, homegrown EHR in use for a minimum of 3 years, in academically affiliated primary care practices.

Under the new meaningful use criterion to “generate lists of patients by specific conditions,” other healthcare providers, systems, and EHRs will implement EHR-based quality reporting differently from our implementation of the ARI Quality Dashboard. The meaningful use Final Rule is nonprescriptive regarding what conditions should be listed, how conditions are to be reported, the manner in which clinicians fulfill the criterion, and the ends to which such lists are generated.33 This lack of specific direction will encourage variety in implementation over the coming years. Unlike our implementation, under meaningful use clinicians will have a direct financial incentive to participate in EHR-based quality reporting. It remains to be seen whether this criterion and financial incentives translate into improved quality of care. Further research needs to explore the environmental context, fidelity of implementation, and other factors that explain the success or failure of health information technology interventions, especially in a rapidly changing environment where it is likely that a combination of factors (eg, payment reform, organizational changes, health information technology tools) may be necessary to see a robust effect on patient outcomes.

Electronic health records have been promoted as part of the solution to improve healthcare quality and as a critical part of practice redesign in the United States.34-37 Unfortunately, EHR use has not been associated with improved quality of care.38-40 Meaningful use requires that providers use the EHR as more than an electronic replacement of the paper chart.41 The meaningful use criteria are supposed to be both “ambitious and achievable,”3 but each of the meaningful use criteria also should be effective. Perhaps even more than the introduction of most new medical treatments, the meaningful use criteria, because of their broad reach, are major healthcare interventions.42,43 Each of the present and forthcoming meaningful use criteria should be rigorously evaluated and shown to be effective in improving quality.

Author Affiliations: From the Division of General Medicine and Primary Care (JAL, JLS, RT, DTY, AJM, BM), Brigham and Women's Hospital, Boston, MA; Department of Medicine (JAL, JLS, MBP, BM), Harvard Medical School, Boston, MA; Clinical and Quality Analysis (DTY, LAV, AJM) and Clinical Informatics Research and Development (RT, MO-Y, BM), Partners HealthCare System, Boston, MA.

Funding Source: This study was supported by grants from the Agency for Healthcare Research and Quality and the National Heart, Lung, and Blood Institute (R01HS015169, K08HS014563, and K08HL072806).

Author Disclosures: Dr Middleton reports having received grants from CDS Consortium (AHRQ) and also reports his board membership with Catholic Health Initiatives. The other authors (JAL, JLS, RT, DTY, LAV, AJM, MBP, MO-Y) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (JAL, JLS, RT, LAV, MBP, MO-Y, BM); acquisition of data (JAL, DTY, AJM, MO-Y); analysis and interpretation of data (JAL, JLS, RT, DTY, LAV, AJM, MBP, MO-Y); drafting of the manuscript (JAL, RT, BM); critical revision of the manuscript for important intellectual content (JAL, JLS, RT, DTY, LAV, MBP, BM); statistical analysis (JAL, DTY, MO-Y); obtaining funding (JAL, JLS, LAV, BM); administrative, technical, or logistic support (JAL, RT, LAV, AJM, BM); and supervision (JAL, JLS, LAV, BM).

Address correspondence to: Jeffrey A. Linder, MD, MPH, Division of General Medicine and Primary Care, Brigham and Women's Hospital, 1620 Tremont St, BC-3-2X, Boston, MA 02120. E-mail: jlinder@partners.org.

1. Bush GW. State of the Union Address. January 31, 2006. http://georgewbush-whitehouse.archives.gov/stateoftheunion/2006/. Accessed August 10, 2010.

2. Obama B. State of the Union Address. February 24, 2009. http://www.whitehouse.gov/the_press_office/remarks-of-president-barackobama-address-to-joint-session-of-congress/. Accessed August 10, 2010.

3. Blumenthal D, Tavenner M. The "meaningful use" regulation for electronic health records. N Engl J Med. 2010;363(6):501-504.

4. Cherry DK, Hing E, Woodwell DA, Rechtsteiner EA. National Ambulatory Medical Care Survey: 2006 summary. Natl Health Stat Report. 2008 Aug 6;(3):1-39.

5. Steinman MA, Gonzales R, Linder JA, Landefeld CS. Changing use of antibiotics in community-based outpatient practice, 1991-1999. Ann Intern Med. 2003;138(7):525-533.

6. Grijalva CG, Nuorti JP, Griffin MR. Antibiotic prescription rates for acute respiratory tract infections in US ambulatory settings. JAMA. 2009;302(7):758-766.

7. Gonzales R, Malone DC, Maselli JH, Sande MA. Excessive antibiotic use for acute respiratory infections in the United States. Clin Infect Dis. 2001;33(6):757-762.

8. Linder JA. Editorial commentary: antibiotics for treatment of acute respiratory tract infections: decreasing benefit, increasing risk, and the irrelevance of antimicrobial resistance. Clin Infect Dis. 2008;47(6):744-746.

9. Ranji SR, Steinman MA, Shojania KG, et al. Closing the quality gap: a critical analysis of quality improvement strategies technical review. Antibiotic Prescribing Behavior. Rockville, MD: Agency for Healthcare Research and Quality; January 2006.

10. Gonzales R, Bartlett JG, Besser RE, et al. Principles of appropriate antibiotic use for treatment of acute respiratory tract infections in adults: background, specific aims, and methods. Ann Intern Med. 2001;134(6):479-486.

11. Linder JA, Jung E, Housman D, et al. The Acute Respiratory Infection Quality Dashboard: a performance measurement reporting tool in an electronic health record. AMIA Annu Symp Proc. 2007:1035.

12. Linder JA, Schnipper JL, Palchuk MB, Einbinder JS, Li Q, Middleton B. Improving care for acute and chronic problems with Smart Forms and Quality Dashboards. AMIA Annu Symp Proc. 2006:1193.

13. Olsha-Yehiav M, Einbinder JS, Jung E, et al. Quality Dashboards: technical and architectural considerations of an actionable reporting tool for population management. AMIA Annu Symp Proc. 2006:1052.

14. Hueston WJ, Slott K. Improving quality or shifting diagnoses? What happens when antibiotic prescribing is reduced for acute bronchitis? Arch Fam Med. 2000;9(9):933-935.

15. Linder JA, Bates DW, Williams DH, Connolly MA, Middleton B. Acute infections in primary care: accuracy of electronic diagnoses and electronic antibiotic prescribing. J Am Med Inform Assoc. 2006;13(1):61-66.

16. Pan Q, Ornstein S, Gross AJ, et al. Antibiotics and return visits for respiratory illness: a comparison of pooled versus hierarchical statistical methods. Am J Med Sci. 2000;319(6):360-365.

17. Avorn J, Solomon DH. Cultural and economic factors that (mis)shape antibiotic use: the nonpharmacologic basis of therapeutics. Ann Intern Med. 2000;133(2):128-135.

18. Linder JA, Singer DE. Desire for antibiotics and antibiotic prescribing for adults with upper respiratory tract infections. J Gen Intern Med. 2003;18(10):795-801.

19. Gonzales R, Steiner JF, Maselli J, Lum A, Barrett PH Jr. Impact of reducing antibiotic prescribing for acute bronchitis on patient satisfaction. Eff Clin Pract. 2001;4(3):105-111.

20. Tomii K, Matsumura Y, Maeda K, Kobayashi Y, Takano Y, Tasaka Y. Minimal use of antibiotics for acute respiratory tract infections: validity and patient satisfaction. Intern Med. 2007;46(6):267-272.

21. Pawlson LG. The past as prologue: future directions in clinical performance measurement in ambulatory care. Am J Manag Care. 2007;13(11):594-596.

22. Greene RA, Beckman H, Chamberlain J, et al. Increasing adherence to a community-based guideline for acute sinusitis through education, physician profiling, and financial incentives. Am J Manag Care. 2004;10(10):661-662.

23. Aspinall SL, Metlay JP, Maselli JH, Gonzales R. Impact of hospital formularies on fluoroquinolone prescribing in emergency departments. Am J Manag Care. 2007;13(5):241-248.

24. Centers for Medicare & Medicaid Services. Evaluation & Management Services Guide. July 2009. https://www.cms.gov/MLNProducts/downloads/eval_mgmt_serv_guide.pdf. Accessed August 13, 2010.

25. Zhang Y, Lee BY, Donohue JM. Ambulatory antibiotic use and prescription drug coverage in older adults. Arch Intern Med. 2010;170(15):1308-1314.

26. Linder JA. Health information technology as a tool to improve care for acute respiratory infections. Am J Manag Care. 2004;10(10):661-662.

27. Javitt JC, Steinberg G, Locke T, et al. Using a claims data-based sentinel system to improve compliance with clinical guidelines: results of a randomized prospective study. Am J Manag Care. 2005;11(2):93-102.

28. Samore MH, Bateman K, Alder SC, et al. Clinical decision support and appropriateness of antimicrobial prescribing: a randomized trial. JAMA. 2005;294(18):2305-2314.

29. Linder JA, Schnipper JL, Tsurikova R, et al. Documentation-based clinical decision support to improve antibiotic prescribing for acute respiratory infections in primary care: a cluster randomised controlled trial. Inform Prim Care. 2009;17(4):231-240.

30. Steinman MA, Yang KY, Byron SC, Maselli JH, Gonzales R. Variation in outpatient antibiotic prescribing in the United States. Am J Manag Care. 2009;15(12):861-868.

31. Gill JM, Fleischut P, Haas S, Pellini B, Crawford A, Nash DB. Use of antibiotics for adult upper respiratory infections in outpatient settings: a national ambulatory network study. Fam Med. 2006;38(5):349-354.

32. Steinman MA, Landefeld CS, Gonzales R. Predictors of broad-spectrum antibiotic prescribing for acute respiratory tract infections in adult primary care. JAMA. 2003;289(6):719-725.

33. Department of Health and Human Services, Centers for Medicare & Medicaid Services. Medicare and Medicaid Programs; Electronic Health Record Incentive Programs; Final Rule. 42 CFR. 2010:44313-44588.

34. Reid RJ, Fishman PA, Yu O, et al. Patient-centered medical home demonstration: a prospective, quasi-experimental before and after evaluation. Am J Manag Care. 2009;15(9):e71-e87.

35. Nutting PA, Miller WL, Crabtree BF, Jaen CR, Stewart EE, Stange KC. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med. 2009;7(3):254-260.

36. Bates DW, Bitton A. The future role of health information technology in the patient centered medical home. Health Aff (Millwood). 2010;29(4):614-621.

37. Cusack CM, Knudson AD, Kronstadt JL, Singer RF, Brown AL. Practice-Based Population Health: Information Technology to Support Transformation to Proactive Primary Care. (Prepared for the AHRQ National Resource Center for Health Information Technology under Contract No. 290-04-0016.) Rockville, MD: Agency for Healthcare Research and Quality; 2010. AHRQ publication 10-0092-EF.

38. Linder JA, Ma J, Bates DW, Middleton B, Stafford RS. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med. 2007;167(13):1400-1405.

39. Keyhani S, Hebert PL, Ross JS, Federman A, Zhu CW, Siu AL. Electronic health record components and the quality of care. Med Care. 2008;46(12):1267-1272.

40. DesRoches CM, Campbell EG, Vogeli C, et al. Electronic health records: limited successes suggest more targeted uses. Health Aff (Millwood). 2010;29(4):639-646.

41. Hersh WR. Adding value to the electronic health record through secondary use of data for quality assurance, research, and surveillance. Am J Manag Care. 2007;13(6 part 1):277-278.

42. Kmetik KS, Chung J, Sims S. The performance of performance measures. Am J Manag Care. 2007;13(10):547-549.

43. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357(6):608-613.
