Nationwide data on hospital emergency department visits reveal little evidence of unintended adverse consequences associated with publicly reporting hospitals’ antibiotic timing in pneumonia.
Published Online: February 14, 2009
Mark W. Friedberg, MD, MPP; Ateev Mehrotra, MD, MPH; and Jeffrey A. Linder, MD, MPH
Objective: To determine whether publicly reporting hospital scores on antibiotic timing in pneumonia (percentage of patients with pneumonia receiving antibiotics within 4 hours) has led to unintended adverse consequences for patients.
Study Design: Retrospective analyses of 13,042 emergency department (ED) visits by adult patients with respiratory symptoms in the National Hospital Ambulatory Medical Care Survey, 2001-2005.
Methods: Rates of pneumonia diagnosis, antibiotic use, and waiting times to see a physician were compared before and after public reporting, using a nationally representative hospital sample. These outcomes also were compared between hospitals with different antibiotic timing scores.
Results: There were no differences in rates of pneumonia diagnosis (10% vs 11% of all ED visits, P = .72) or antibiotic administration (34% vs 35%, P = .21) before and after antibiotic timing score reporting. Mean waiting times to be seen by a physician increased similarly for patients with and without respiratory symptoms (11-minute vs 6-minute increase, respectively; P = .29). After adjustment for confounders, hospitals with higher 2005 antibiotic timing scores had shorter mean waiting times for all patients, but there were no significant score-related trends for rates of pneumonia diagnosis or antibiotic use.
Conclusion: Despite concerns, public reporting of hospital antibiotic timing scores has not led to increased pneumonia diagnosis, antibiotic use, or a change in patient prioritization. (Am J Manag Care. 2009;15(2):137-144)
Take-Away Points
There has been concern that publicly reporting hospital scores on antibiotic timing in pneumonia (percentage of patients with pneumonia who received antibiotics within 4 hours) has led to unintended adverse consequences for patients.
- A national sample of hospitals revealed little evidence that this public reporting has led to widespread overdiagnosis of pneumonia or inappropriate antibiotic administration.
- Explainable variation in hospitals’ antibiotic timing scores is primarily attributable to differences in patients’ waiting times to see a physician, rather than differences in rates of pneumonia diagnosis or antibiotic administration.
- Future monitoring of the effects of public reporting programs may provide valuable guidance to policy makers, especially in areas of controversy.
To encourage improvement in hospitals’ quality of care, the Hospital Quality Alliance (HQA) began an initiative to collect and publicly report hospital-level performance on 10 quality measures in 2004.1-3 More than 98% of US acute care hospitals supply performance data to the HQA,4,5 but concerns have been raised about potential unintended consequences of public reporting.6-8
Hospitals’ responses to the HQA measure “Initial Antibiotic Received within 4 Hours of Hospital Arrival” have been of particular concern. Hospitals feeling pressure to improve antibiotic timing performance could potentially “play for the test” by encouraging the premature (and potentially inaccurate) diagnosis of pneumonia, giving antibiotics indiscriminately to patients with respiratory symptoms, or inappropriately prioritizing patients likely to have pneumonia ahead of others whose medical conditions may be more urgent.9-14
Prior studies from single institutions and self-selected hospitals participating in a pay-for-performance pilot program suggest that incentives tied to pneumonia antibiotic timing scores have led to increased rates of inaccurate pneumonia diagnosis and inappropriate antibiotic administration in emergency departments (EDs).15-17 However, whether public reporting on antibiotic timing has had similar effects on a national scale is unknown.
Because antibiotic timing scores are thought to reflect care delivered in EDs,18 we used a national database of ED visits and compared the care for patients with respiratory symptoms before and after the start of public reporting. We assessed whether these patients were more likely to be diagnosed with pneumonia, to be prescribed antibiotics, and to have shorter waiting times to see a physician (compared with patients who did not have respiratory symptoms, reflecting patient prioritization). To test the hypothesis that hospitals with higher scores were “playing for the test,” we also assessed differences on these 3 measures between hospitals scoring higher and lower on the pneumonia antibiotic timing measure.
METHODS AND MATERIALS
Emergency Department Visit Data
We used patient visit data from the Emergency Department module of the nationally representative National Hospital Ambulatory Medical Care Survey (NHAMCS), which is administered annually by the National Center for Health Statistics (NCHS).19 A rotating panel of nonfederal general and short-stay hospitals participates in the NHAMCS, and visits from each hospital in the panel are sampled approximately every 15 months.
Trained hospital staff record patient and clinical data on standardized forms for each visit. Patient data include demographic information, expected source of payment, and nursing home residence. Clinical data include up to 3 reasons for visit, up to 3 physician diagnoses (in the ED), up to 8 medications administered during the ED visit (but not the timing of medication administration), waiting time to see a physician (2003-2005 only), triage vital signs, and orientation to person, place, and time.20
The NHAMCS Emergency Department module collected 182,332 patient visit records between 2001 and 2005. Among contacted hospitals over the study period, 90% to 95% participated. Visits are weighted to allow extrapolation of survey results to national estimates.19 For our analysis the NCHS created anonymous hospital identifiers that allowed longitudinal tracking of each participating hospital. The NCHS institutional review board approved all NHAMCS protocols, and the confidentiality of the data is protected by law.21
Hospital Antibiotic Timing Scores and Hospital Characteristics
We obtained publicly available hospital-level HQA performance data on the timing of initial antibiotics delivered to patients admitted with pneumonia for 2004 and 2005. Hospital scores were calculated as the percentage of adult patients discharged from the hospital with a diagnosis of pneumonia who received their first dose of antibiotics within 4 hours of hospital arrival. Detailed specifications for this measure are available elsewhere.22
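The score definition above reduces to a simple proportion. A minimal sketch of that calculation, using hypothetical visit records (the field names and timestamps are illustrative, not actual HQA or NHAMCS data elements):

```python
from datetime import datetime, timedelta

def antibiotic_timing_score(visits):
    """Percentage of pneumonia discharges whose first antibiotic dose
    came within 4 hours of hospital arrival.

    Each visit is a dict with 'arrival' and 'first_antibiotic'
    datetimes; these names are illustrative only.
    """
    within_4h = sum(
        1 for v in visits
        if v["first_antibiotic"] - v["arrival"] <= timedelta(hours=4)
    )
    return 100.0 * within_4h / len(visits)

# Hypothetical example: 3 of 4 patients received antibiotics within
# 4 hours of arrival, so the hospital's score would be 75.0.
visits = [
    {"arrival": datetime(2005, 1, 1, 8, 0), "first_antibiotic": datetime(2005, 1, 1, 10, 30)},
    {"arrival": datetime(2005, 1, 1, 9, 0), "first_antibiotic": datetime(2005, 1, 1, 12, 0)},
    {"arrival": datetime(2005, 1, 1, 7, 0), "first_antibiotic": datetime(2005, 1, 1, 12, 30)},
    {"arrival": datetime(2005, 1, 1, 6, 0), "first_antibiotic": datetime(2005, 1, 1, 9, 59)},
]
print(antibiotic_timing_score(visits))  # 75.0
```

The detailed HQA specification22 includes eligibility and exclusion rules (eg, the minimum of 25 discharges for a stable score) that this sketch omits.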
We also linked hospitals’ pneumonia antibiotic timing scores to other hospital characteristics obtained from the 2005 database of the American Hospital Association: number of beds, geographic region, urban location, ownership (for-profit, not-for-profit, and government), status of membership in the Council of Teaching Hospitals, and percentage of patients covered by Medicare and Medicaid. Of the 507 unique hospitals participating in the NHAMCS Emergency Department module during 2001-2005, NCHS staff matched 503 (99%) to their HQA pneumonia antibiotic timing scores and other corresponding hospital characteristics.
Consistent with prior literature, we included for analysis only hospitals reporting stable antibiotic timing scores, defined as scores calculated using at least 25 patient discharges during the year 2005.2 Of the 503 NHAMCS sample hospitals, 118 (23%) were excluded based on this criterion.
Our study population included ED visits during 2001-2005 by patients age 18 years and older whose primary reason for visit was “symptoms referable to the respiratory system” or “diseases of the respiratory system,” excluding conditions limited to the upper respiratory tract (eg, nasal congestion). Among included visits, the most common specific reasons for visit were cough (50%), shortness of breath (24%), and “labored or difficult breathing” (11%). In supplementary analyses, inclusion of visits for upper respiratory conditions did not substantively alter our results. Visits were included regardless of patients’ dispositions at the end of each ED visit (eg, admitted to hospital, transferred, discharged to home). Supplementary analyses limited to visits resulting in hospital admission did not substantively alter our results.
Outcome Variables: Processes of Emergency Department Care
We had 3 major outcome variables: ED diagnosis, antibiotic use, and waiting time to see a physician. We used International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to classify ED diagnoses as pneumonia (with the same codes used for antibiotic timing score reporting),22 bronchitis, congestive heart failure, or other. It was possible for a single visit to carry more than 1 of these diagnoses, and we counted visits receiving diagnoses in more than 1 category toward the total in each applicable diagnostic category.
Antibiotic use was identified using the NCHS drug classification system.20,23 As in previous studies, we classified antibiotic use in visits for asthma and congestive heart failure as inappropriate when pneumonia was not also an ED diagnosis.16 In supplementary analyses we also included antibiotic use in bronchitis as inappropriate without substantive changes to the results.
We performed 2 main comparisons. First, we analyzed nationwide longitudinal trends and differences in the outcome variables (ED diagnosis, antibiotic use, and waiting time to see a physician) before and after the start of public score reporting among ED visits for respiratory symptoms. We designated January 1, 2004, as the first day of the public reporting period because this was the first day of care that could contribute to publicly available antibiotic timing scores. Because the antibiotic timing measure was first published in October 2003,4 we designated October 1, 2003, as the first day of the reporting period in supplementary analyses with substantively similar results.
Second, we conducted a cross-sectional analysis of relationships between 2005 antibiotic timing scores and the outcome variables, restricting our analysis to ED visits in the public reporting period (2004-2005). We hypothesized that if hospitals were “playing for the test” in order to raise their scores, patients with respiratory symptoms visiting hospitals with the highest 2005 scores would experience the highest rates of pneumonia diagnosis, the highest rates of antibiotic use, and (relative to patients without respiratory symptoms) the shortest waiting times.
We assessed relationships between categorical variables using the χ2 test. Due to the nonnormal distribution of waiting time to see a physician, waiting times were modeled using generalized log-linear regression.
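Two analytic ingredients described above can be sketched in miniature: the Pearson χ2 statistic for a 2 × 2 table of categorical outcomes, and the log transformation used because waiting times are right-skewed (exponentiating the mean of the logs yields the geometric mean). This is a pure illustration of the statistics; the actual analyses also involved survey weights and covariate adjustment, which are omitted here:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    without continuity correction: sum over cells of (O - E)^2 / E."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    observed = [a, b, c, d]
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def geometric_mean(waits):
    """Summarize right-skewed waiting times on the log scale;
    exponentiating the mean log gives the geometric mean."""
    return math.exp(sum(math.log(w) for w in waits) / len(waits))

# A table with identical rows shows no association (statistic = 0).
print(chi2_2x2(10, 10, 10, 10))  # 0.0
print(geometric_mean([2.0, 8.0]))  # 4.0
```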