The American Journal of Managed Care, September 2013

Risk-Stratification Methods for Identifying Patients for Care Coordination

Lindsey R. Haas, MPH; Paul Y. Takahashi, MD; Nilay D. Shah, PhD; Robert J. Stroebel, MD; Matthew E. Bernard, MD; Dawn M. Finnie, MPA; and James M. Naessens, ScD
Identifying which patients are likely to benefit from care coordination is important. We evaluated the performance of 6 risk-screening instruments in predicting healthcare utilization.
We applied logistic regression models to compare the 6 risk-stratification methods and determine which best predicted the dichotomous outcomes in the subsequent year. Model performance was assessed on explanatory power and goodness of fit. Explanatory power was assessed with (1) the C statistic, with 95% confidence intervals, for predicting hospitalizations, ED visits, 30-day readmissions, and high-cost users, and (2) the ability of each model to identify individuals with the outcomes of interest in the highest and lowest predicted deciles. The C statistic is a measure of model discrimination and is equivalent to the area under the receiver operating characteristic curve.27 To assess goodness of fit, we compared the observed and predicted hospitalizations, ED visits, readmissions, and high-cost users in the lowest and highest deciles of predicted probability.28

To address calibration, we further focused our assessment on patients at the highest end of each risk score to identify which patients need care coordination. Because the hybrid model was used clinically in 2011, we used the number of patients considered for PCMH enrollment as the target number to establish the threshold for potential care coordination (based on the highest estimated probability of hospitalization). We then determined (1) how much overlap in identification occurred across the 7 methods (the 6 risk scores plus the hybrid clinical approach) and (2) how much of the total utilization the 6 other approaches would have identified.

Analyses were conducted with SAS software, version 9.1 (SAS Institute Inc, Cary, North Carolina). The study was approved by the Mayo Clinic Institutional Review Board. No external funding was obtained.
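For illustration only (the study itself was analyzed in SAS), a minimal Python sketch of this type of comparison for one risk score and one outcome is shown below. The DataFrame and column names (risk_score for the baseline-year score, hospitalized as a 0/1 indicator in the prediction year) are assumptions, not the study's variables; the sketch fits a logistic model, estimates the C statistic with a bootstrap 95% confidence interval, and compares observed with mean predicted event rates in the lowest and highest predicted deciles.

```python
# Minimal sketch (not the authors' SAS code) of evaluating one risk score against one
# dichotomous outcome. Assumes a pandas DataFrame `df` with hypothetical columns
# 'risk_score' (baseline-year score) and 'hospitalized' (0/1 outcome in the prediction year).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def c_statistic_with_ci(df, score_col="risk_score", outcome_col="hospitalized",
                        n_boot=1000, seed=0):
    """Fit a logistic model; return the C statistic, a bootstrap 95% CI, and predictions."""
    X = df[[score_col]].to_numpy()
    y = df[outcome_col].to_numpy()
    pred = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    c_stat = roc_auc_score(y, pred)

    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample patients with replacement
        if y[idx].min() == y[idx].max():        # skip resamples containing only one class
            continue
        boot.append(roc_auc_score(y[idx], pred[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return c_stat, (lo, hi), pred

def decile_fit(df, pred, outcome_col="hospitalized"):
    """Observed vs mean predicted event rates in the lowest and highest predicted deciles."""
    d = df.assign(pred=pred)
    d["decile"] = pd.qcut(d["pred"], 10, labels=False, duplicates="drop")
    summary = d.groupby("decile").agg(observed=(outcome_col, "mean"),
                                      predicted=("pred", "mean"))
    return summary.loc[[summary.index.min(), summary.index.max()]]
```

Repeating such a comparison for each of the 6 scores and each of the 4 outcomes would yield a summary analogous to Table 2.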

RESULTS

The study population included 83,187 patients who met inclusion criteria between January 1, 2009, and December 31, 2010. The mean age of the base population was 46.9 years, 54.6% were female, 63.1% had private insurance, and 21.8% had Medicare and/or Medicaid coverage (Table 1). Table 1 shows the frequency distributions of the demographic characteristics at the end of 2009, as well as the percentage of paneled patients with selected chronic diseases.

Healthcare utilization and resource use for the base and prediction years were similar (Table 1). Approximately 8% of patients had a hospitalization, and 13% of the cohort had an ED visit. Compared with the base year 2009, the mean total cost in 2010 was nearly the same, decreasing by 1%. We saw the expected concentration of healthcare services among a relatively small number of individuals (Figure 1). Overall, 32.4% of the most expensive 10% of patients in 2009 were also in the top 10% of patients in 2010. Furthermore, our outcomes of interest were correlated, though not perfectly. High cost was associated with each of the utilization measures, with Pearson correlations ranging from 0.22 for ED visits to 0.72 for hospitalizations. The correlations between utilization measures ranged from 0.12 between ED visits and readmission to 0.35 between hospitalization and readmission.
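A short sketch of how such descriptive measures could be computed is given below; the column names (cost_2009, cost_2010, hospitalized, ed_visit, readmit_30d, high_cost) are assumptions for illustration, not the study's variable names.

```python
# Sketch (assumed column names) of the descriptive results above: persistence of
# high-cost status across years and Pearson correlations among the outcomes of interest.
import pandas as pd

def top_decile_persistence(df, base_col="cost_2009", follow_col="cost_2010"):
    """Fraction of the top 10% most costly patients in the base year who are also
    in the top 10% in the following year."""
    top_base = df[base_col] >= df[base_col].quantile(0.90)
    top_follow = df[follow_col] >= df[follow_col].quantile(0.90)
    return (top_base & top_follow).sum() / top_base.sum()

def outcome_correlations(df, cols=("hospitalized", "ed_visit", "readmit_30d", "high_cost")):
    """Pairwise Pearson correlations among the utilization and cost outcomes."""
    return df[list(cols)].corr(method="pearson")
```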

As shown in Table 2, the ACG model outperformed the other 5 models in predicting hospitalizations, with C statistics ranging from 0.67 (CMS-HCC) to 0.73 (ACG). The ERA score and MN Tiering followed close behind ACG for prediction of hospitalization (C statistic = 0.71). In the models predicting ED visits, the C statistic ranged from 0.58 (CMS-HCC) to 0.67 (ACG), with the ACG model again having the best predictive ability. The ACG model also outperformed the other models when predicting 30-day readmissions; the C statistic ranged from 0.74 (CMS-HCC) to 0.81 (ACG). When predicting which patients would be in the top 10% of healthcare expenditures (high-cost users), the performance of the ACG model was good (C statistic = 0.76) and superior to that of the other models. CMS-HCC had the lowest predictive ability for all 4 outcomes. It is important to point out that although ACGs had the best predictive ability, much of the variability in outcomes was unexplained by any model. For each outcome, models with higher C statistics also had higher rates of actual events in the highest deciles. For example, the top decile for the ACG model had 27% with a hospitalization and 31% with at least 1 ED visit, whereas the top decile for the CMS-HCC model had 25% with a hospitalization and 23% with an ED visit, indicating greater discrimination with ACG.
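A minimal sketch of this model-by-model comparison for a single outcome is shown below; the score columns (acg, cms_hcc, era, mn_tiering, ccc, charlson) and the 0/1 hospitalized outcome are hypothetical names, and the approach simply ranks methods by C statistic and reports the observed event rate in each method's top predicted decile.

```python
# Sketch (hypothetical column names) of ranking the 6 risk methods by C statistic for one
# outcome, and of the observed event rate within each method's top predicted decile.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

SCORE_COLS = ["acg", "cms_hcc", "era", "mn_tiering", "ccc", "charlson"]  # assumed names

def compare_methods(df, outcome_col="hospitalized"):
    y = df[outcome_col].to_numpy()
    rows = []
    for score in SCORE_COLS:
        pred = LogisticRegression().fit(df[[score]], y).predict_proba(df[[score]])[:, 1]
        in_top_decile = pred >= np.quantile(pred, 0.90)   # top 10% of predicted risk
        rows.append({"method": score,
                     "c_statistic": round(roc_auc_score(y, pred), 2),
                     "top_decile_event_rate": round(y[in_top_decile].mean(), 2)})
    return pd.DataFrame(rows).sort_values("c_statistic", ascending=False)
```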

To further evaluate the 6 methods, we compared the patients in the top decile of predicted probability of having a hospitalization (Figure 2). The ERA method tended to overpredict for those in the top decile, whereas the CMS-HCC method underpredicted. For the ACG, MN Tiering, and CCC methods, the actual and predicted hospitalizations were nearly equivalent. Using the model based on ACGs, 26.8% of patients in the highest decile were hospitalized; using the MN Tiering model, 25.1% in the highest decile were hospitalized; and using the Charlson Comorbidity Index, only 22.9% in the highest decile were hospitalized. Similar results were seen for the other 3 outcomes (results are available in eAppendices A, B, and C, available at www.ajmc.com).

A total of 2347 (2.8%) patients were identified as meriting care coordination based on the hybrid clinical approach (Table 3). At least 40% of these patients were in the top group selected by each risk method, irrespective of which method was used. Interestingly, our initial clinical implementation identified the patients with the highest number of hospitalizations, the highest percentage with any ED visit, and the highest total costs.
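The overlap analysis might be sketched as follows; the columns hybrid_selected (a 0/1 flag for the clinically identified hybrid cohort) and n_hospitalizations are assumptions. For each risk score, an equally sized top-ranked group is selected, and the sketch reports the overlap with the hybrid cohort and the share of total hospitalizations that group captures.

```python
# Sketch (assumed columns) of the overlap analysis: compare the patients flagged by the
# hybrid clinical approach with an equally sized top-ranked group under another method.
import pandas as pd

def overlap_with_hybrid(df, score_col, hybrid_flag="hybrid_selected",
                        hosp_col="n_hospitalizations"):
    n_target = int(df[hybrid_flag].sum())              # size of the hybrid cohort (e.g., 2347)
    top = df.nlargest(n_target, score_col)             # same-sized top group by this risk score
    overlap = top[hybrid_flag].sum() / n_target        # share also flagged by the hybrid approach
    captured = top[hosp_col].sum() / df[hosp_col].sum()  # share of all hospitalizations captured
    return {"overlap_fraction": round(overlap, 2),
            "hospitalizations_captured": round(captured, 2)}
```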

DISCUSSION

We assessed 6 risk-stratification methods based on administrative and demographic data, comparing their performance against one another in predicting future healthcare utilization. We concluded that ACGs produced more accurate predictions of future healthcare utilization than the other models.

All risk prediction models for hospitalization had fair predictive value, with ACG having the highest C statistic at 0.73 and the CMS-HCC model the lowest at 0.67. In a previous large study, the ACG had an excellent predictive area under the curve of 0.80.29 An earlier study on the HCC showed an area under the curve of 0.638 for predicting hospitalizations among newly enrolled Medicare patients.16 MN Tiering, CCC, and the ERA all performed similarly; thus, use of any of them could be justified. These findings indicate to both providers and health plans that any of these risk-stratification models can be used for clinical purposes. The individual risk instruments performed similarly for ED visits, readmissions, and high-cost users as for hospitalizations, as might be expected, because these outcomes tend to be correlated. The predictive values of the risk-stratification instruments were slightly lower for ED use, but higher for 30-day readmission and high-cost users. These findings provide important information regarding the use of newer risk-stratification tools such as MN Tiering, ERA, and CCC: the instruments predicted not only hospitalization but also rehospitalization and ED visits.

Although all 6 risk-screening instruments rely on the presence of diagnoses and demographic factors, they vary with respect to ease of implementation. Unfortunately, the best-performing methods, ACG and MN Tiering, are also the only methods we examined that require software licensing. A form of MN Tiering based on manual classification is available.30 CMS-HCC is a software package that can be downloaded from CMS. The algorithms for the ERA Index, CCC, and Charlson Comorbidity Index have all been published and are available, but they must be programmed locally before they can be applied in clinical use.

When we compared top-scoring patients identified by our hybrid model with patients identified by other models, there was substantial overlap, resulting in similar rates of hospitalization, percentages with ED visits, and mean total costs. Of the individuals identified as high risk by the hybrid model, 41% had a hospitalization, compared with a low of 34% of individuals identified as high risk with the Charlson Comorbidity Index. Although our findings suggest that any risk-stratification model has some value in identifying high-risk individuals, ACGs and MN Tiering performed better than Charlson Comorbidity Index or CMS-HCC scores on all 4 outcomes, whereas ERA and CCC scores performed in the middle. Because the Charlson Comorbidity Index was developed to predict 1-year mortality, it might not predict utilization and cost outcomes as well as other instruments. CMS-HCC and ERA scores focus on the Medicare population and the elderly, respectively, and may perform less well with the general adult population.

Because a variety of risk-instrument methods are available, our results can help guide the choice of the instrument best suited to identifying patients who may benefit from care coordination or other PCMH interventions. Other risk-stratification methods exist, but some proprietary methods were not available for this study.

This study has clear strengths and weaknesses. It utilized an entire population of adult primary care patients who receive their care in an integrated system with hospital and ED access. Because study data were restricted to provider sources, care that patients received outside the Mayo Clinic system would not have been captured. This may have lessened the study’s predictive ability; however, those identified as high risk would still be clinically relevant. The potential for this bias to systematically alter the results is small, given that most adults in Mayo Clinic ECH panels receive all of their care at the Mayo Clinic.

Our reliance on provider data precluded the use of outpatient pharmacy data. Pharmacy is an important component of the total costs of care, and pharmacy data may improve predictive models based on medical claims information. However, because we focused on comparing multiple medical claims–based identification methods, this limitation likely caused no bias. Although the Mayo Clinic Rochester medical record system is robust and provides insight into all medical conditions seen by primary care and specialty care providers,31 diagnosis codes were based on billing data, allowing the possibility of miscoding or missing information. This limitation should not systematically favor 1 risk system over another. Lastly, our study is based on analysis of a single region, and the population of Olmsted County is largely white and of Northern European descent.32 This may limit the generalizability of the findings to other populations in the United States and around the world. Our results are consistent with those of other studies, but ideally they should be verified in other settings.

 