Using laboratory and administrative data, large managed care organizations can assign severity of illness scores to patients with pneumonia for risk adjustment and reporting.
Objective: To describe the development and assessment of the Abbreviated Fine Severity Score (AFSS), a simplified version of the Pneumonia Severity Index (PSI) suitable for providing risk-adjusted reports to clinicians caring for patients hospitalized with community-acquired pneumonia.
Study Design: Retrospective cohort study.
Methods: We defined the AFSS based on data available in administrative and laboratory databases. We downloaded and linked hospitalization and laboratory data for 2 cohorts (11,030 and 6147 patients) hospitalized with community-acquired pneumonia in all Kaiser Permanente Medical Care Program hospitals in northern California. We then assessed the relationship between the AFSS and mortality, length of stay, intensive care unit admission, and the use of assisted ventilation. Using logistic regression analysis, we assessed the performance of the AFSS and determined the area under the receiver operating characteristic curve (c statistic). Using a combination of manual and electronic medical record review, we compared the AFSS with the full PSI in 2 patient subsets, in northern California and in Denver, Colorado, whose medical records were manually reviewed.
Results: The AFSS compares favorably with the PSI with respect to predicting mortality. It has good discrimination with respect to inhospital (c = 0.74) and 30-day (c = 0.75) mortality. It also correlates strongly with the PSI (r = 0.87 and r = 0.93 in the 2 medical record review subsets).
Conclusions: The AFSS can be used to provide clinically relevant risk-adjusted outcomes reports to clinicians in an integrated healthcare delivery system. It is possible to apply risk-adjustment methods from research settings to operational ones.
(Am J Manag Care. 2008;14(3):158-166)
Risk adjustment using physiologic data has been limited to intensive care unit admissions or to research studies.
It is possible for integrated healthcare delivery systems to conduct risk adjustment for hospitalized patients with community-acquired pneumonia.
The Abbreviated Fine Severity Score (AFSS) has good discrimination and has the advantage of incorporating laboratory results from automated databases. It compares well with the Pneumonia Severity Index, which requires manual medical record review.
The use of the AFSS is an intermediate strategy because the use of more complex severity scores will be possible once fully automated medical records are available.
More than 4 million cases of community-acquired pneumonia (CAP) occur in the United States each year, with 1.3 million patients hospitalized.1-3 CAP remains the leading infectious cause of mortality in the United States.4 Its incidence among older persons is 18.3 cases per 1000 persons,5 and CAP accounts for almost 7% of all US inpatient hospital costs.6,7 CAP is also the most common cause of severe sepsis.8
Given its importance, managed care organizations seeking to improve quality of care must find ways of addressing practice variation in the management of CAP. The presence and persistence of practice variation have undermined the credibility of hospitals and physicians with purchasers and with the public.9-12 A major challenge facing managed care organizations seeking to reduce practice variation is how to respond to clinicians’ concerns regarding differences in patient illness severity. The most commonly used administrative data sources for risk adjustment are hospital discharge abstracts, which are based on International Classification of Diseases (ICD) codes (usually grouped into diagnosis-related groups13). These readily available sources have 2 major disadvantages. First, they use information that is unavailable at the time of clinical decision making (eg, an ICD code that indicates that a patient experienced assisted ventilation). Second, they do not contain information about a patient’s physiologic state. Research investigations address these limitations by incorporating data acquired through manual medical record review such as vital signs and laboratory test results.13,14 However, because of the high costs of acquiring such data, it is difficult for managed care organizations to incorporate them into routine reports provided to clinicians.
A few recent studies have shown that it is possible to assign abbreviated forms of severity scores using laboratory data from automated databases15,16 or from manual data collection in combination with electronic data.17 Render et al16,18 developed a method for risk adjusting adult intensive care unit (ICU) outcomes that combines laboratory data with administrative data, while Graham and Cook17 combined manually acquired diagnostic and electronically captured laboratory data. More recently, Pine et al14 highlighted the value of supplementing condition-specific risk-adjustment models with laboratory data. Although it is clear that incorporation of vital signs, radiologic findings, and other findings of the physical examination in severity scores is desirable, many hospitals do not yet have ready access to these data in electronic format. Consequently, it seems reasonable to use these abbreviated scores during this transitional phase in medicine.
In this article, we describe the development and assessment of such a transitional tool, an abbreviated version of an existing severity of illness score, the Pneumonia Severity Index (PSI).19,20 Our Abbreviated Fine Severity Score (AFSS) combines available laboratory data with administrative data and is used for routine reporting to clinicians at 18 hospitals in an integrated healthcare delivery system, the Kaiser Permanente Medical Care Program (KPMCP). We chose this score as our starting point because of the clinical importance of CAP and because the PSI is an integral part of the KPMCP CAP clinical practice guideline.21
MATERIALS AND METHODS
We developed the AFSS using hospitalization data from 16 Northern California KPMCP hospitals between January 1, 2000, and March 30, 2002 (cohort 1). All of these facilities use the same comprehensive information systems linked by a common medical record number. By the time that ongoing operational reporting using the AFSS became routine, an additional 2 hospitals were in operation. Therefore, cohort 2 consisted of CAP admissions between July 1, 2004, and June 30, 2005, at 18 KPMCP hospitals in northern California.
Limited resources allowed review of medical records for only the following 3 patient subsets: (1) We compared the electronically assigned AFSS with the PSI based on manual medical record review in the Colorado region of the KPMCP, which has 398,706 members, including admissions to Exempla St Joseph Hospital (ESJH) in Denver, Colorado (the primary contract hospital for KPMCP patients in Denver), and the Kaiser Permanente outpatient clinics from which patients were referred between November 1, 2004, and April 22, 2005. (2) We performed a similar medical record review using randomly selected hospitalizations from cohort 1 at the KPMCP Oakland Medical Center. (3) We reviewed a randomly selected group of patients from cohort 1 to assess the accuracy of the diagnosis of pneumonia.
We obtained approval from the institutional review boards for the protection of human subjects in the northern California and Colorado regions of the KPMCP. The approval included a waiver of individual informed consent.
Identification of Patients With CAP
The PSI uses 19 predictors (eg, arterial pH and altered mental status) and has a maximum value of 285 plus the patient's age in years.20 Of these 19 predictors, the following 7 are not readily available in hospital discharge abstracts or in laboratory databases and could not be included: whether a patient was a nursing home resident, 5 physical examination components (eg, respiratory rate), and the presence of pleural effusion. The remaining 12 predictors, which constitute the AFSS, are available in KPMCP databases. Based on the PSI scoring scheme, these 12 items permit a maximum score of 180 plus the patient's age in years.
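As an illustration, the AFSS is the sum of the published PSI point values19,20 for the 12 retained predictors. The sketch below shows this arithmetic; the variable names, units, and data layout are hypothetical and do not represent the operational KPMCP implementation.

```python
# Illustrative AFSS computation using the PSI point values from Fine et al
# for the 12 predictors retained in the AFSS. Field names are hypothetical.

def afss_points(age, female, comorbidities, labs):
    """Return the AFSS for one admission.

    comorbidities: set of strings among the 5 PSI comorbid conditions.
    labs: dict of worst values from the 24 hours before admission;
    a missing test is imputed as normal and contributes 0 points.
    """
    score = age - (10 if female else 0)  # PSI demographic points

    comorbidity_points = {
        "neoplastic_disease": 30,
        "liver_disease": 20,
        "congestive_heart_failure": 10,
        "cerebrovascular_disease": 10,
        "renal_disease": 10,
    }
    score += sum(pts for name, pts in comorbidity_points.items()
                 if name in comorbidities)

    # Laboratory predictors: (hypothetical key, abnormality test, points).
    lab_points = [
        ("arterial_ph",    lambda v: v < 7.35, 30),
        ("bun_mg_dl",      lambda v: v >= 30,  20),
        ("sodium_meq_l",   lambda v: v < 130,  20),
        ("glucose_mg_dl",  lambda v: v >= 250, 10),
        ("hematocrit_pct", lambda v: v < 30,   10),
        ("pao2_mm_hg",     lambda v: v < 60,   10),
    ]
    for key, abnormal, pts in lab_points:
        value = labs.get(key)  # None when the test was not obtained
        if value is not None and abnormal(value):
            score += pts
    return score
```

With every retained predictor at its worst value, the non-age points total 180, consistent with the maximum described above.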
We scanned the KPMCP and ESJH hospitalization and laboratory databases, downloaded all relevant test results obtained on a patient during the 24-hour period preceding hospital admission, and linked these to the electronic discharge abstracts. In cases in which more than 1 test of a given type was obtained in this time frame, we selected the result that would give the highest point assignment. If an individual test result was not obtained for a patient (eg, arterial pH), it was imputed as normal, and 0 points were assigned.
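The worst-value selection and normal imputation just described can be sketched as follows, here using blood urea nitrogen as an example; the field names and record layout are hypothetical.

```python
# Illustrative selection of the "worst" result when several tests of the
# same type fall within the 24 hours before admission: keep the value
# yielding the highest point assignment; if no result exists, return None
# so that downstream scoring imputes normal (0 points).
from datetime import datetime, timedelta

def worst_value(results, admit_time, points_fn):
    """results: list of (drawn_at, value); points_fn maps a value to points."""
    window_start = admit_time - timedelta(hours=24)
    in_window = [v for t, v in results if window_start <= t <= admit_time]
    if not in_window:
        return None  # imputed as normal downstream
    return max(in_window, key=points_fn)

# Example: BUN >= 30 mg/dL scores 20 PSI points, otherwise 0.
bun_points = lambda v: 20 if v >= 30 else 0
admit = datetime(2001, 5, 1, 12, 0)
draws = [(datetime(2001, 5, 1, 2, 0), 18),
         (datetime(2001, 5, 1, 9, 0), 34)]
worst_value(draws, admit, bun_points)  # -> 34 (the 20-point result)
```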
We ascertained inhospital mortality, assisted ventilation, ICU admission, and total hospital length of stay (LOS) from KPMCP and ESJH databases in northern California and Denver. To ascertain 30-day mortality, we linked KPMCP records to California or Colorado death certificates and to publicly available Medicare files using previously described methods.28
We ascertained the use of assisted ventilation based on ICD procedure codes 96.7 (other continuous mechanical ventilation), 96.70 (continuous mechanical ventilation of unspecified duration), 96.71 (continuous mechanical ventilation <96 hours), or 96.72 (continuous mechanical ventilation ≥96 hours). To establish total LOS for patients transferred between hospitals, we linked records involving multiple hospital stays, so LOS is defined as the exact time in days and hours between the first hospital admission in a linked hospital stay and the final discharge to home or a skilled nursing facility or death.
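The ventilation and length-of-stay rules above can be sketched as follows; the record structure is illustrative, while the ICD procedure codes are those listed in the text.

```python
# Illustrative ascertainment of assisted ventilation and of total LOS
# across a linked hospital stay (transfers between hospitals combined).
from datetime import datetime

VENT_CODES = {"96.7", "96.70", "96.71", "96.72"}  # ICD-9-CM procedure codes

def assisted_ventilation(procedure_codes):
    """True if any listed mechanical ventilation code is present."""
    return bool(VENT_CODES & set(procedure_codes))

def linked_los_days(stays):
    """stays: list of (admit, discharge) datetimes for one linked episode.

    Returns the exact elapsed time in days from the first admission to
    the final disposition (discharge or death).
    """
    first_admit = min(admit for admit, _ in stays)
    final_disposition = max(disch for _, disch in stays)
    return (final_disposition - first_admit).total_seconds() / 86400

# Example: a transfer on day 3, final discharge on the evening of day 6.
stays = [(datetime(2004, 8, 1, 8, 0), datetime(2004, 8, 3, 8, 0)),
         (datetime(2004, 8, 3, 8, 0), datetime(2004, 8, 6, 20, 0))]
linked_los_days(stays)  # -> 5.5 days
```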
Manual Medical Record Abstraction
Because the Colorado KPMCP had an operational outpatient electronic medical record during the study period, we could manually audit all of the Denver patients’ outpatient notes before admission, admission histories and physical examination findings, and dictated discharge summaries. These latter 2 items are retrievable through the electronic medical record.
To compare the AFSS and the PSI in Northern California KPMCP patients, we randomly selected 200 CAP hospitalizations that began at the KPMCP Oakland hospital between January 1, 2000, and March 30, 2002. Of these, 100 were randomly selected hospitalizations in which the patient died or was admitted to the ICU, and 100 were randomly selected hospitalizations in which the patient survived and did not experience ICU admission.
We randomly selected 70 northern California records to verify the CAP diagnosis. We manually reviewed Northern California KPMCP radiology databases, outpatient diagnosis databases, and electronic discharge summaries to confirm the CAP diagnosis by the presence of a new infiltrate on a chest radiograph within 48 hours of admission or by the admitting physician’s physical examination findings. If confirmation or refutation of the diagnosis of pneumonia was not possible in this fashion, we manually reviewed the paper hospital medical records.
During our initial development phase, we did not test the performance of the AFSS as a simple aggregate score but instead grouped the laboratory and comorbidity points into subscores. We did this to educate our target audience (chiefs of hospital-based medicine and ICU directors) with respect to the relative contribution of different factors to patient outcomes. Production reports use only aggregate scores.
We used logistic regression analysis to examine each dichotomous outcome (death, admission to the ICU, and the use of mechanical ventilation) relative to the full AFSS or its components. We assessed the performance of these models using the area under the receiver operating characteristic curve (c statistic).29 We used the likelihood ratio χ2 to estimate the relative contribution of each model predictor, calculating the marginal increase in this χ2 statistic accounted for by each predictor as it was added to and removed from the full model.16,30 We used linear regression analysis and the Pearson product moment correlation coefficient to compare the AFSS with the PSI.
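For reference, the c statistic equals the probability that a randomly chosen patient who experienced the outcome received a higher predicted risk than a randomly chosen patient who did not, with ties counted as one half. The minimal sketch below illustrates that definition; it is not the statistical software used for the analyses.

```python
# Minimal c statistic (area under the ROC curve) computed directly from
# concordant (event, nonevent) pairs, counting tied risks as 1/2.
def c_statistic(risks, outcomes):
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum((e > n) + 0.5 * (e == n)
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))

# Perfect separation of deaths from survivors yields c = 1.0;
# uninformative predictions yield approximately 0.5.
c_statistic([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # -> 1.0
```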
RESULTS

Between July 1, 2004, and June 30, 2005, in northern California, 6147 patients met the inclusion criteria. Among these patients, the inhospital mortality was 7.7%, while the 30-day mortality was 13.4%. Seven hundred six patients (11.5%) were admitted to the ICU, and 233 patients (3.8%) experienced assisted ventilation. Among survivors, the median LOS was 3.7 days, while the mean LOS was 5.3 days.
Patient Subsets With Manual Medical Record Review
Table 2 gives the characteristics of cohort 1, in which the AFSS scores ranged from 8 to 191, with a median score of 84 and a mean score of 84.6. Table 3 gives the results of the logistic regression model in which the outcome was mortality and the predictors were age, sex, laboratory subscore, and comorbidity subscore. This model predicted mortality with c statistics of 0.74 for inhospital mortality and 0.75 for 30-day mortality. Examination of observed and expected mortality rates across different risk groups showed that the models were well calibrated until they reached a predicted mortality risk of 35%. Above this predicted risk, the number of patients was small. When considering inhospital death, 157 patients (1.4% of all patients) had a predicted mortality risk of at least 35%, and the number of deaths in this group was 48 (30.6%); this group of patients had a mean (SD) age of 83.5 (8.1) years. Their mean (SD) AFSS was 156.1 (10.1). In the model for 30-day mortality, 810 patients (7.3% of all patients) had a predicted mortality risk of at least 35%, and the number of deaths in this group was 313 (38.6%). This group of patients had a mean (SD) age of 84.6 (7.7) years. Their mean (SD) AFSS was 134.9 (13.8).
DISCUSSION

The PSI was the result of a federally funded research effort focusing on CAP that included quantitative assessment of predictors for hospital admission and discharge.19,20,31-36 The PSI has been used to compare antibiotic therapies, need for different degrees of treatment intensity, and efficacy of diagnostic and treatment strategies.37-42 In our context, developing a risk-adjustment method based on the PSI, rather than a completely recalibrated or a totally new score, was desirable for several reasons. Given the high cost of manual medical record review and the unavailability of vital signs, coded radiologic findings, and other physical examination components in electronic form, adopting the strategy by Render et al16,18 seemed a reasonable way to improve on the existing reporting strategy, which until this point was based entirely on ICD codes. Credibility with clinicians is important, so basing the method on a quantitative tool that had already been endorsed by the department chiefs of internal medicine, emergency medicine, and hospital-based medicine made sense. In addition, using data extracted from the outpatient electronic medical record data repository and from an inpatient data system, we found that the AFSS performed well in Denver. Although the initial investment required in developing the AFSS was substantial because it marked the first time the KPMCP attempted to use laboratory data for routine reporting, subsequent costs have fallen notably, and generating the report now requires only 8 to 12 hours of programmer time per quarter.
The AFSS has important limitations. The most important of these is the loss of discrimination and calibration because of the absence of physical and radiographic findings. In our study, the AFSS did not discriminate as well as the original PSI (c statistic range, 0.83-0.89) or as well as a recalibrated PSI (c = 0.85).43 However, it performed as well as the manually assigned CURB (confusion, urea nitrogen, respiratory rate, and blood pressure) and CURB-65 scores.44,45 As noted by others,46 it is more difficult to predict the need for intensive care and the use of assisted ventilation, and we found that the overall performance of the AFSS with respect to predicting these outcomes is similar to that reported for medical record review–based indexes, including the complete PSI.46 It is important to keep in mind that any score calibrated for a given outcome may not perform as well when used to predict a different outcome, particularly when the different outcome is more dependent on practice style than on a discrete physiologic end point (eg, admission to the ICU as opposed to inhospital mortality). The AFSS Hosmer-Lemeshow statistic was significant, and examination of specific risk deciles showed that our abbreviated score did not calibrate as well among patients whose predicted mortality risk exceeded 35%. Examination of this group, which constitutes less than 3% of the cohort, showed that it consisted of a much older and sicker set of patients. This suggests that future predictive models may need to handle older patients differently, perhaps by including age-specific interaction terms for laboratory data and comorbidities, and that collaborative efforts should be made so that greater numbers of these patients are in study datasets to minimize stochastic variation.
Given the need to have robust risk-adjustment tools that have credibility with clinicians, we believe that these limitations are offset by the benefits of being able to generate a production report using automated databases.
Reports using the AFSS are not being presented to clinicians as providing conclusive evidence that center x is “better” than center y. This is important given the sample size and methodological limitations noted by other investigators,47,48 as well as the limitations of our abbreviated score and the original PSI, which may need to be modified to include other predictors.49 Instead, the AFSS and reports based on it are being used (1) as directional indicators that can motivate collaboration between hospitals by serving as a starting point for further investigation of interhospital differences, (2) as a risk-adjustment tool for performing “mini case-control studies” using multivariate template matching,50 and (3) as an aid to estimating sample sizes for other clinical investigations in which it is useful to have a sense of the overall severity distribution in a target population. Moreover, the AFSS is not being used for the management of or for any critique of the management of individual patients.
A quarterly report showing interfacility comparisons among different types of patients with CAP entered production in the Northern California KPMCP in March 2006. Since the availability of this report, clinicians in the KPMCP have been using it to assess their management of CAP among patients at different risk levels. For example, a facility whose CAP mortality is low but whose CAP LOS is high is reviewing randomly selected medical records using stratified random sampling based on the AFSS. Clinicians have requested additional analyses and changes in the report. Figure 2 shows a new graphic, similar to one developed by Render et al18 for ICU reporting for Veterans Affairs hospitals, that will be incorporated into the production report beginning in 2008. This format permits clinicians to compare risk-adjusted mortality and LOS in survivors. Clinicians also requested that we provide a rough equivalency table comparing the AFSS with the PSI risk classes. The Appendix (available at www.ajmc.com) provides data showing the performance of the AFSS in the later Northern California KPMCP cohort using strata that approximate 5 risk classes based on mortality ranges in the original studies by Fine et al.19,20 Additional details of our chart validation and current reports are also included in the online Appendix.
Obviously, the method we have defined can be used only by entities with the capability to store and retrieve laboratory data using standardized formats, which does not yet include all hospitals in the United States. On the other hand, increasing numbers of hospitals are acquiring this capability, so it is important for health services researchers to begin defining ways to use these methods operationally. Moreover, given that vital signs, physical examination components, and coded radiologic findings will become available electronically in the near future, the AFSS must be considered a transitional tool that will ultimately be replaced by more robust severity scores.
In conclusion, our findings demonstrate that it is possible to incorporate electronically captured laboratory data in risk-adjustment models for patients with CAP. Organizations with integrated information systems (eg, hospital chains) can identify a cohort of patients with CAP and assign abbreviated point scores based on studies originally developed using paper medical records. These tools may then be used for production reports to support quality improvement efforts.
We thank Marcus Lee, Joseph Severson, and Susan Shetterly for their assistance in creating the datasets; Bruce Folck, Peter Scheirer, MA, and Jay Soule for assistance with programming; Claudia Wu for developing the Excel production report; Joseph V. Selby, MD, MPH, Paul Feigenbaum, MD, and Philip Madvig, MD, for reviewing the manuscript; and Amy Opperman and Allison Edwards for medical record audits.
Author Affiliations: Kaiser Permanente Medical Care Program (GJE, BHF, MNG, JYL, PK) and Kaiser Foundation Health Plan (PK), Oakland, CA; Kaiser Permanente Medical Center (GJE), Walnut Creek, CA; Kaiser Permanente Medical Center (MPC), Vallejo, CA; and Kaiser Permanente of Colorado (TEP), Denver.
Funding Sources: This project was funded by The Permanente Medical Group, Inc, and by the Sidney Garfield Memorial Fund.
Author Disclosures: The authors (GJE, BHF, TEP, MNG, JYL, MPC, PK) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (GJE, BHF, MPC, PK); acquisition of data (TEP, MNG, JYL, MPC); analysis and interpretation of data (GJE, BHF, TEP, MNG, JYL, MPC, PK); drafting of the manuscript (GJE, TEP, MNG, MPC); critical revision of the manuscript for important intellectual content (GJE, BHF, TEP, JYL, MPC, PK); statistical analysis (GJE, BHF, JYL, PK); provision of study materials or patients (TEP); obtaining funding (GJE); administrative, technical, or logistic support (GJE, TEP, MNG); and supervision (GJE, TEP).
Address correspondence to: Gabriel J. Escobar, MD, Kaiser Permanente Medical Care Program, 2000 Broadway, 2nd Fl, Oakland, CA 94612. E-mail: email@example.com.
1. Agency for Health Care Policy and Research. Clinical Classification for Health Policy Research: Hospital Inpatient Statistics. Rockville, MD: Agency for Health Care Policy and Research; 1999:99-134.
3. DeFrances CJ, Podgornik MN. 2004 National Hospital Discharge Survey. Adv Data. 2006, May 4;(371):1-19.
5. Kaplan V, Angus DC, Griffin MF, Clermont G, Scott Watson R, Linde-Zwirble WT. Hospitalized community-acquired pneumonia in the elderly: age- and sex-related patterns of care and outcome in the United States. Am J Respir Crit Care Med. 2002;165(6):766-772.
7. Lave JR, Lin CC, Fine MJ, Hughes-Cromwick P. The cost of treating patients with community-acquired pneumonia. Semin Respir Crit Care Med. 1999;20(3):189-197.
9. Blumenthal D. The variation phenomenon in 1994. N Engl J Med. 1994;331(15):1017-1018.
11. Blumenthal D. Quality of health care, part 4: the origins of the quality-of-care debate. N Engl J Med. 1996;335(15):1146-1149.
13. Iezzoni LI. Risks and outcomes. In: Risk Adjustment for Measuring Healthcare Outcomes. Chicago, IL: Health Administration Press; 1997:1-41.
15. McMahon LF Jr, Hayward RA, Bernard AM, Rosevear JS, Weissfeld LA. APACHE-L: a new severity of illness adjuster for inpatient medical care. Med Care. 1992;30(5):445-452.
17. Graham PL, Cook DA. Prediction of risk of death using 30-day outcome: a practical end point for quality auditing in intensive care. Chest. 2004;125(4):1458-1466.
18. Render ML, Kim HM, Deddens J, et al. Variation in outcomes in Veterans Affairs intensive care units with a computerized severity measure. Crit Care Med. 2005;33(5):930-939.
20. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336(4):243-250.
22. Selby JV. Linking automated databases for research in managed care settings. Ann Intern Med. 1997;127(8, pt 2):719-724.
24. Hylek EM, Go AS, Chang Y, et al. Effect of intensity of oral anticoagulation on stroke severity and mortality in atrial fibrillation. N Engl J Med. 2003;349(11):1019-1026.
26. Escobar GJ, Joffe S, Gardner MN, Armstrong MA, Folck BF, Carpenter DM. Rehospitalization in the first two weeks after discharge from the neonatal intensive care unit. Pediatrics. 1999;104(1):e2.
28. Arellano MG, Petersen GR, Petitti DB, Smith RE. The California Automated Mortality Linkage System (CAMLIS). Am J Public Health. 1984;74(12):1324-1330.
29. Harrell FE Jr, Lee KL, Califf RM, Pryor DB, Rosati RA. Regression modelling strategies for improved prognostic prediction. Stat Med. 1984;3(2):143-152.
31. Fine MJ, Singer DE, Hanusa BH, Lave JR, Kapoor WN. Validation of a pneumonia prognostic index using the MedisGroups Comparative Hospital Database. Am J Med. 1993;94(2):153-159.
33. Fine MJ, Hough LJ, Medsger AR, et al. The hospital admission decision for patients with community-acquired pneumonia: results from the Pneumonia Patient Outcomes Research Team cohort study. Arch Intern Med. 1997;157(1):36-44.
35. Halm EA, Fine MJ, Kapoor WN, Singer DE, Marrie TJ, Siu AL. Instability on hospital discharge and the risk of adverse outcomes in patients with pneumonia. Arch Intern Med. 2002;162(11):1278-1284.
37. Atlas SJ, Benzer TI, Borowsky LH, et al. Safely increasing the proportion of patients with community-acquired pneumonia treated as outpatients: an interventional trial. Arch Intern Med. 1998;158(12):1350-1356.
40. Marrie TJ, Lau CY, Wheeler SL, Wong CJ, Vandervoort MK, Feagan BG; CAPITAL Study Investigators. A controlled trial of a critical pathway for treatment of community-acquired pneumonia: Community-Acquired Pneumonia Intervention Trial Assessing Levofloxacin. JAMA. 2000;283(6):749-755.
42. Waterer GW, Wunderink RG. The influence of the severity of community-acquired pneumonia on the usefulness of blood cultures. Respir Med. 2001;95(1):78-82.
44. Lim WS, van der Eerden MM, Laing R, et al. Defining community acquired pneumonia severity on presentation to hospital: an international derivation and validation study. Thorax. 2003;58(5):377-382.
46. Angus DC, Marrie TJ, Obrosky DS, et al. Severe community-acquired pneumonia: use of intensive care services and evaluation of American and British Thoracic Society diagnostic criteria. Am J Respir Crit Care Med. 2002;166(5):717-723.
48. Hofer TP, Hayward RA. Identifying poor-quality hospitals: can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996;34(8):737-753.
49. Marrie TJ, Huang JQ. Low-risk patients admitted with community-acquired pneumonia. Am J Med. 2005;118(12):1357-1363.