Electronic Health Record Problem Lists: Accurate Enough for Risk Adjustment?

Timothy J. Daskivich, MD, MSHPM; Garen Abedi, MD, MS; Sherrie H. Kaplan, PhD, MPH; Douglas Skarecky, BS; Thomas Ahlering, MD; Brennan Spiegel, MD, MSHS; Mark S. Litwin, MD, MPH; and Sheldon Greenfield, MD
Electronic health record (EHR)-based comorbidity assessment had low sensitivity for identifying major comorbidities and poorly predicted survival. EHR-based comorbidity data require validation prior to application to risk adjustment.
ABSTRACT

Objectives: To determine whether comorbidity information derived from electronic health record (EHR) problem lists is accurate. 

Study Design: Retrospective cohort study of 1596 men diagnosed with prostate cancer between 1998 and 2004 at 2 Southern California Veterans Affairs Medical Centers with long-term follow-up. 

Methods: We compared EHR problem list–based comorbidity assessment with manual review of EHR free-text notes in terms of sensitivity and specificity for identification of major comorbidities and Charlson Comorbidity Index (CCI) scores. We then compared EHR-based CCI scores with free-text–based CCI scores in prediction of long-term mortality. 

Results: EHR problem list–based comorbidity assessment had poor sensitivity for detecting major comorbidities: myocardial infarction (8%), cerebrovascular disease (32%), diabetes (46%), chronic obstructive pulmonary disease (42%), peripheral vascular disease (31%), liver disease (1%), and congestive heart failure (23%). Specificity was above 94% for all comorbidities. Free-text–based CCI scores were predictive of long-term other-cause mortality, whereas EHR problem list–based scores were not. 

Conclusions: Inaccuracies in EHR problem list–based comorbidity data can lead to incorrect determinations of case mix. Such data should be validated prior to application to risk adjustment. 

Am J Manag Care. 2018;24(1):e24-e29
Takeaway Points
Electronic health record (EHR)-based data are increasingly being used for purposes of risk adjustment, but our findings show that the Veterans Affairs EHR problem list led to incorrect determinations of case mix and did not predict survival, in contrast to chart-based comorbidity assessment. Given the increasing importance of EHR data, we believe that: 
  • EHR-based comorbidity data should be validated prior to application to risk adjustment. 
  • Other sources of comorbidity information, such as patient-reported measures of health status or natural language processing–derived data, may be considered for risk adjustment in comparisons of performance.
Recent study results highlight the inconsistency of different sources of data (eg, registries, claims, the electronic health record [EHR]) for identifying basic health information, such as major comorbidities.1-5 For individual physicians, these inconsistencies are less relevant because they have the opportunity to confirm this information directly with the patient. However, when used for risk adjustment for purposes of performance assessment,6-8 incorrect data may lead to misclassification and unfair comparisons. This is a major concern for health systems participating in alternative payment models, which base some portion of reimbursement on risk-adjusted quality outcomes.9-13 Comorbidity is a key component of the risk adjustment needed for fair comparisons of measures of quality, as comorbid disease burden affects readmissions,14-16 complications,17-19 quality of life,20,21 and mortality.22-24 Accurately quantifying comorbidity requires varying degrees of detail regarding number, severity, or types of conditions, depending on the measure used.25 All of these measures require robust data sources to identify the presence or absence of each of the included comorbid conditions. 

Whereas comorbidity data from inpatient medical records are reviewed by trained coders, outpatient records may be less reliable,1-5 as they often rely on “problem lists” in the EHR to identify the index condition. The problem list is a compilation of patient diagnoses entered by clinicians during patient encounters and updated at varying intervals. As growing numbers of institutions and office-based practices use EHRs to store patient data, interest has grown in using these lists as a source of comorbidity data.26 However, it is unclear whether the data in these lists are sufficiently accurate to assess patients’ total comorbid disease burden. Recent studies have attempted to validate the accuracy of the problem list by comparing it with other diagnosis lists or with proximate short-term outcomes, such as glycated hemoglobin levels.1 However, the most appropriate metric for assessing the validity of these lists is long-term mortality, especially in an elderly population; a longer list of major comorbidities should be strongly associated with higher mortality.

In this study, we compared the ability of Charlson Comorbidity Index (CCI) scores derived from the Veterans Affairs (VA) problem list to predict mortality in an elderly population with a gold standard for comorbidity assessment, manual abstraction directly from the physician’s free-text notes. We captured mortality over a 10-year period, which was long enough to reveal the impact of even minor comorbidities over time. Because the problem list is not actively maintained, we hypothesized that the problem list would provide poor accuracy in identifying comorbidities and would poorly predict survival compared with free-text–based assessment. 

METHODS

Data Sources and Study Participants

We used the California Cancer Registry to identify men newly diagnosed with prostate cancer at the Greater Los Angeles and Long Beach VA Medical Centers between 1998 and 2004 (N = 1915). We reviewed EHRs for sociodemographic, tumor risk, comorbidity, and survival data and identified all men with sufficient data to determine comorbidity and survival (n = 1596). Institutional review board approval was granted by the University of California, Los Angeles, and both VA Medical Centers. 

Variables

Comorbidity. We assessed comorbid disease burden at the time of prostate cancer diagnosis using 2 sources of data: 1) the EHR free-text notes record and 2) the EHR problem list. The interdisciplinary EHR free-text notes record contained outpatient and inpatient notes from all clinical encounters. Data from the medical record within 12 months of the diagnosis of prostate cancer were used for free-text–based comorbidity assessment. Comorbidities were coded according to the definitions originally indicated by Charlson et al,22 and age-unadjusted CCI scores were calculated. Because free-text–based comorbidity assessment was conducted primarily by 1 author (TJD), reliability was assessed on a random 5% subset of the sample by a separate author (GA). Interrater agreement in Charlson scores was 77.5% and the associated kappa statistic was 0.67. 
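To make this scoring step concrete, the sketch below sums age-unadjusted CCI points from abstracted comorbidity flags using the original Charlson weights. It is an illustrative Python sketch, not the authors' abstraction tool; the flag names and dictionary structure are hypothetical.

```python
# Illustrative sketch (not the authors' code): age-unadjusted Charlson Comorbidity
# Index (CCI) points summed from comorbidity flags abstracted from free-text notes.
# Weights are the original Charlson values; flag names are hypothetical.
# The index prostate cancer was excluded from scoring in this study.

CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1, "cerebrovascular_disease": 1,
    "dementia": 1, "chronic_pulmonary_disease": 1,
    "connective_tissue_disease": 1, "peptic_ulcer_disease": 1,
    "mild_liver_disease": 1, "diabetes": 1,
    "diabetes_with_end_organ_damage": 2, "hemiplegia": 2,
    "moderate_or_severe_renal_disease": 2, "any_tumor": 2,
    "leukemia": 2, "lymphoma": 2,
    "moderate_or_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6, "aids": 6,
}

def age_unadjusted_cci(flags: dict) -> int:
    """Sum Charlson weights for every comorbidity flagged as present."""
    return sum(weight for condition, weight in CHARLSON_WEIGHTS.items()
               if flags.get(condition, False))

# Example: diabetes plus chronic pulmonary disease scores 1 + 1 = 2.
print(age_unadjusted_cci({"diabetes": True, "chronic_pulmonary_disease": True}))
```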

The VA problem list was populated and updated by clinicians at their discretion; diagnoses were coded with International Classification of Diseases, Ninth Revision (ICD-9) codes, dated, and entered into a retrievable database. Comorbidities added to the problem list up to 12 months after the diagnosis of prostate cancer were used for EHR problem list–based comorbidity assessment. Comorbidities were coded according to the claims-based definitions indicated by Deyo et al,27 and age-unadjusted Deyo-CCI scores were calculated. Neither prostate cancer nor any complications of prostate cancer were included in comorbidity scoring for either comorbidity assessment method.
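The problem list side of the comparison maps ICD-9 entries to Charlson categories in the spirit of the Deyo adaptation. The sketch below illustrates that mapping for a small subset of categories; the code prefixes are abbreviated rather than the complete Deyo definitions, and the function names are hypothetical.

```python
# Illustrative sketch: mapping ICD-9 problem list entries to Charlson categories
# by code prefix, in the spirit of the Deyo adaptation. Only a partial, simplified
# set of prefixes is shown; the full Deyo definitions are more extensive.

DEYO_PREFIXES = {
    "myocardial_infarction": ("410", "412"),
    "congestive_heart_failure": ("428",),
    "cerebrovascular_disease": tuple(str(code) for code in range(430, 439)),    # 430-438
    "chronic_pulmonary_disease": tuple(str(code) for code in range(490, 497)),  # 490-496 (subset)
    "diabetes": ("2500", "2501", "2502", "2503"),                               # 250.0-250.3 (subset)
}

def problem_list_comorbidities(icd9_codes):
    """Return the Charlson categories triggered by a list of ICD-9 code strings."""
    normalized = [code.replace(".", "") for code in icd9_codes]
    return {
        category
        for category, prefixes in DEYO_PREFIXES.items()
        if any(code.startswith(prefixes) for code in normalized)
    }

# Example: an old myocardial infarction (412) and type 2 diabetes (250.00).
print(problem_list_comorbidities(["412", "250.00"]))  # {'myocardial_infarction', 'diabetes'}
```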

Survival model covariates. Age at diagnosis was coded as a continuous variable. Race/ethnicity was coded as Caucasian, African American, Hispanic, or other. Tumor characteristics included prostate-specific antigen (PSA), Gleason sum, and clinical tumor (T), node, and metastasis stage at diagnosis. Categories for PSA, Gleason sum, and clinical T stage were defined by the widely accepted D’Amico criteria, which have been shown to predict overall and cancer-specific mortality.28,29 
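For reference, the commonly cited D’Amico cutoffs for these 3 tumor covariates are sketched below. The exact coding used in the study is not reported here, so the thresholds and function names should be read as assumptions rather than the authors’ specification.

```python
# Sketch of commonly cited D'Amico cutoffs for categorizing the tumor covariates.
# These thresholds are assumptions, not the authors' published coding scheme.

def psa_category(psa_ng_ml: float) -> str:
    if psa_ng_ml < 10:
        return "low"           # PSA < 10 ng/mL
    if psa_ng_ml <= 20:
        return "intermediate"  # PSA 10-20 ng/mL
    return "high"              # PSA > 20 ng/mL

def gleason_category(gleason_sum: int) -> str:
    if gleason_sum <= 6:
        return "low"           # Gleason sum 2-6
    if gleason_sum == 7:
        return "intermediate"
    return "high"              # Gleason sum 8-10

def clinical_t_category(clinical_t: str) -> str:
    if clinical_t in ("T1a", "T1b", "T1c", "T2a"):
        return "low"
    if clinical_t == "T2b":
        return "intermediate"
    return "high"              # T2c or higher
```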

Mortality. Survival was measured from date of treatment until date of death. We determined date of death using a combination of the medical record and the Social Security Death Index. Cause of death was determined using the medical record with an algorithm that has been previously described.30

Statistical Analysis

We determined the prevalence of 7 major comorbidities using free-text–based and EHR problem list–based comorbidity assessment. Sensitivity and specificity for identification of major comorbidities were calculated, using free-text–based assessment as the gold standard. Agreement between assessment methods was ascertained for each comorbidity using Cohen’s kappa statistic. 
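A minimal sketch of these accuracy metrics is shown below, assuming per-patient boolean indicators for each comorbidity from the 2 sources; the variable names are illustrative.

```python
# Minimal sketch of the accuracy metrics: sensitivity and specificity of the
# problem list against the free-text gold standard, plus Cohen's kappa for
# chance-corrected agreement. Inputs are per-patient booleans (assumed names).

def accuracy_metrics(gold, test):
    tp = sum(g and t for g, t in zip(gold, test))          # true positives
    fn = sum(g and not t for g, t in zip(gold, test))      # missed by the problem list
    tn = sum(not g and not t for g, t in zip(gold, test))  # true negatives
    fp = sum(not g and t for g, t in zip(gold, test))      # false positives
    n = len(gold)

    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")

    # Cohen's kappa: observed agreement corrected for agreement expected by chance.
    p_observed = (tp + tn) / n
    p_expected = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected) if p_expected < 1 else float("nan")

    return {"sensitivity": sensitivity, "specificity": specificity, "kappa": kappa}
```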

We compared mean CCI scores based on the problem list and on free-text notes using a paired t test for difference in means. Continuous and categorical (0, 1, 2, and ≥3) versions of CCI scores for each assessment method were also compared using Pearson’s correlation and Cohen’s kappa, respectively. 
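The sketch below illustrates these paired comparisons, using SciPy as a stand-in for the Stata routines actually run; the array names and the collapse into 0, 1, 2, and ≥3 categories are assumptions.

```python
# Sketch of the paired CCI comparisons; SciPy is a stand-in for the Stata commands
# the authors used. Inputs are arrays of per-patient CCI scores (assumed names).

from scipy import stats

def categorize_cci(score: int) -> int:
    """Collapse CCI scores into the 0, 1, 2, and >=3 bands used in the paper."""
    return min(score, 3)

def compare_cci(free_text_cci, problem_list_cci):
    t_stat, p_value = stats.ttest_rel(free_text_cci, problem_list_cci)  # paired t test for difference in means
    r, _ = stats.pearsonr(free_text_cci, problem_list_cci)              # correlation of continuous scores
    return {"paired_t_p": p_value, "pearson_r": r}
```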

Multivariable competing risks regression as described by Fine and Gray31 was used to compare prediction of survival by CCI scores derived from each comorbidity assessment method. We calculated subhazard and cumulative incidence of non–prostate cancer mortality by CCI scores for each comorbidity assessment method, treating prostate cancer as a competing risk. All multivariable regression models were adjusted for age, race, VA site, clinical stage, Gleason score, PSA, and treatment type. 
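The Fine and Gray subdistribution hazard models are not reproduced here, but the nonparametric half of this analysis, cumulative incidence of other-cause death with prostate cancer death as a competing risk, can be sketched with the Aalen-Johansen estimator in the lifelines package. The event coding and column names below are assumptions.

```python
# Sketch of the cumulative incidence piece only (the Fine-Gray regression itself
# was fit in Stata and is not reproduced). Assumed event coding:
# 0 = censored, 1 = other-cause death, 2 = prostate cancer death (competing risk).

import pandas as pd
from lifelines import AalenJohansenFitter

def cumulative_incidence_by_cci(df: pd.DataFrame):
    """Estimate cumulative incidence of other-cause death within each CCI band."""
    curves = {}
    for cci_band, subset in df.groupby("cci_category"):  # bands 0, 1, 2, 3 (>=3)
        ajf = AalenJohansenFitter()
        ajf.fit(subset["years_of_followup"], subset["event_code"], event_of_interest=1)
        curves[cci_band] = ajf.cumulative_density_        # cumulative incidence over time
    return curves
```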

We then conducted a sensitivity analysis to determine if year of diagnosis affected our results, as use of the EHR problem list may have changed over time. We subdivided our group into those diagnosed in 2001 or earlier (n = 810) and 2002 or later (n = 786) and repeated our analyses. 

We used P <.05 to denote statistical significance, and all tests were 2-sided. All statistical analyses were performed in Stata, version 12.0 (StataCorp; College Station, Texas). 

RESULTS

The majority of the sample (N = 1596) comprised white (44%) and African American (37%) men. Approximately one-half of the sample was 65 years or younger, and most had early-stage prostate cancer (Table 1). 

EHR problem list–based comorbidity assessment had poor sensitivity but high specificity for identification of common major comorbidities (Table 2). Sensitivity values for EHR problem list–based assessment (using free-text–based assessment as the gold standard) ranged from 8% for myocardial infarction to 46% for diabetes. Specificity was above 94% for all comorbidities. Agreement between EHR problem list–based and free-text–based comorbidity assessment was poor for all major comorbidities, with kappa values ranging from 0.02 to 0.44. Results did not change when the cohort was subdivided into men diagnosed in 2001 or earlier and those diagnosed in 2002 or later.

Comparison of the CCI scores based on the EHR problem list and on free text showed that EHR problem list–based assessment underestimated comorbidity burden (Table 3). Agreement across all scores was 53% (840/1596). Pearson correlation for continuous scores was 0.3, and kappa for categorical scores was 0.2. Among scores that were discordant, 82% (627/765) were higher using free-text–based compared with EHR problem list–based comorbidity assessment. Mean free-text–based and EHR problem list–based CCI scores were 1.1 (95% CI, 1.04-1.19) and 0.5 (95% CI, 0.46-0.56), respectively. Free-text–based scores were significantly higher by a mean of 0.6 points (95% CI, 0.53-0.67; P <.001). 

Competing risks regression analysis showed that free-text–based CCI scores predicted noncancer mortality, whereas EHR problem list–based scores did not. Higher free-text–based CCI scores were associated with a graduated increase in the hazard of other-cause mortality (Table 4), and the 10-year cumulative incidence of noncancer mortality associated with free-text–based scores was 41%, 57%, 64%, and 83% for CCI scores of 0, 1, 2, and ≥3, respectively (Figure, part A). Higher EHR problem list–based CCI scores were not associated with increased mortality risk (Table 4), and EHR problem list–based CCI scores did not discriminate 10-year cumulative incidence of noncancer mortality: 55%, 64%, 62%, and 58% for CCI scores of 0, 1, 2, and ≥3, respectively (Figure, part B). Results did not change when the cohort was subdivided into men diagnosed in 2001 or earlier and those diagnosed in 2002 or later.



 