In a Medicare population, adding 3 self-reported health items to a diagnostic risk-prediction model improves prediction of inpatient admissions and healthcare costs.
To determine whether adding self-reported health and functional status data to a diagnostic risk-score model explains additional variance in predicting inpatient admissions and costs.
Retrospective observational analysis.
We used data from a Health Status Questionnaire (HSQ), completed by 6407 Kaiser Permanente Northwest Medicare patients between December 2006 and October 2008. We used answers from 3 items on the HSQ: (1) General Self-rated Health score, (2) needing help with 1 or more activities of daily living, and (3) having a bothersome health condition. We calculated a DxCG relative risk score from utilization information in the year prior to the survey, using electronic medical records. We compared: (1) DxCG as the sole independent variable and (2) DxCG plus the 3 items as independent variables. We estimated area under the curve (AUC) for each model. Any inpatient admission (yes/no) and being in the top 10% of costs (in the year after survey) were the dependent variables for the first and second logistic regression models, respectively.
The 3 items explained an additional 2.8% and 4.0% of variance for inpatient admissions and top 10% of costs, respectively, in addition to the variance explained by the DxCG score alone. For DxCG alone, the AUC was 0.686 (95% confidence interval [CI] 0.663-0.710) and 0.741 (95% CI 0.719-0.764), respectively, for inpatient admissions and top 10% of costs, and improved to 0.709 (95% CI 0.687-0.730) and 0.770 (95% CI 0.749-0.790) when the 3 self-reported items were added.
Using self-reported health information improved the predictive power of a DxCG model to forecast inpatient admissions and patient cost-tier.
(Am J Manag Care. 2011;17(12):e472-e478)
In a Medicare population, self-reported information about being in poorer health was associated with higher inpatient admissions and being in the top tier for costs. Items associated with these outcomes were: (1) a poorer rating on the General Self-rated Health item, (2) answering yes to “do you need help with 1 or more activities of daily living?” and (3) answering yes to “do you have a bothersome health condition?”
Predictive healthcare models are valuable tools for identifying individuals at risk for adverse outcomes, including high healthcare utilization and mortality. Predictive models are typically based on diagnostic history from administrative data or sometimes on patient-reported outcomes, but rarely both. The purpose of our analyses was to determine whether combining these 2 types of data in an integrated predictive model improved upon the predictive power compared with models based on administrative data alone.
Claims-based predictive models are the most common because of the ubiquitous availability of claims data. There are a variety of commercially available claims-based predictive models, and their validity is well documented.1 The Society of Actuaries has conducted periodic comparisons of the predictive power of these models for following-year costs.2 In these models, health diagnoses and prescriptions, in addition to prior expenses, are the variables with the greatest predictive accuracy. Laboratory results have also been shown to add predictive power.3
Predictive models based on patient-reported outcomes are less common because such measures are not routinely collected and reported. However, patient-reported outcomes have been shown to have strong predictive value for forecasting costs, utilization, and mortality. In particular, the predictive value of a single General Self-rated Health (GSRH) question is well documented. DeSalvo (2005) concluded in a meta-analysis that people with “poor” self-rated health had double the mortality risk of people with “excellent” self-rated health, even after adjustment for key factors such as functional status, depression, and comorbidity.4 GSRH has also been shown to be responsive to change. In a study of older adults, self-reported health declined steeply prior to adverse events, such as death or stroke.5 A study of repeated administrations of 2 GSRH questions showed good reliability, reproducibility, and discriminant scale performance.6 Although the single-question GSRH measure is seldom used in clinical practice settings, it is widely used in medical and economic research.7 In an investigation of the causal relationship between self-reported health and subsequent outcomes, Jylha suggests that aspects of self-reported health may reflect important physiological dysregulations, such as increases in tissue inflammation.7
Similarly, DeSalvo (2009) notes that respondents provide composite assessments of their physical and mental health and social status when answering questions about their overall health, which helps explain the strong performance of GSRH in predicting future risk.8 The literature includes several examples of patient-reported outcomes used in predictive models. DeSalvo (2009) compared the predictive ability of single- and multi-item assessments of self-reported health to a comorbidity index and diagnostic history in predicting next year’s costs. They found that the single-item self-reported health question predicted costs as well as the multi-item assessment and the comorbidity index, but not as well as the diagnostic history model.8 Using data from a survey of Medicare members, Brody et al developed highly sensitive indices of frailty and advanced illness.9,10 The models use survey items about self-reported health and function, such as poor general health and assistance needed for medications, bathing, or dressing, and were shown to be superior to clinical judgment in predicting frailty and advanced illness.
There are, however, few examples of predictive models that combine diagnostic history and patient-reported outcomes. Fleishman et al found that measures of health status improved prediction of subsequent medical expenditures in a model that included demographics, chronic conditions, and previous expenditures in a nationally representative sample.11 Fleishman and Cohen later found that measures of self-rated health and function improved prediction of high expenditures beyond more detailed information on medical conditions for a nationally representative sample.12 These results are consistent with previous studies based on nonrepresentative samples. In a study of Veterans Affairs (VA) beneficiaries, self-reported physical and mental health improved mortality prediction when combined with a diagnostic history model.13 Similarly, in a previous study of VA beneficiaries, a multidomain health assessment added predictive power to a diagnostic history model.14
In this study, we examined whether inclusion of answers to a few brief self-reported items improved the prediction of future inpatient admissions and medical care costs compared with using an administrative risk score alone to predict the same outcomes. This study is unique in that it is the first of which we are aware to conduct this analysis in a Medicare population. Further, the Medicare population is distinctive, given the large proportion of members with chronic health conditions, which contributes to increased inpatient admissions and costs as the US population ages. We selected the well-validated, single-item general health question, a question about activities of daily living,15 and a question indicating whether or not the person currently has a physical condition that bothers him or her. These 3 items were selected because they can be quickly administered during an office visit, over the phone, by mail, or via the Internet; are not tailored to a specific population (eg, those with chronic conditions); and capture information not readily attainable through administrative data.
Study subjects were members of Kaiser Permanente Northwest (KPNW) and were aged 65 years and older. To be included in the study, members needed to complete a Health Status Questionnaire (HSQ) between December 2006 and October 2008, and have 1 year of membership eligibility prior to and after the completion of the HSQ. This time frame allowed us to compute the DxCG scores. When the member had completed more than 1 HSQ in the time frame of interest, 1 HSQ was selected at random for inclusion in the study.
During the period of study, the HSQ was mailed to new Medicare members aged 65 years and older. This population included those newly enrolled to KPNW and current members who recently became Medicare eligible. In addition, it was mailed annually to a Medicare expanded-care population. The general response rate for the HSQ is 76%.
The HSQ is a 45-item survey that includes measures of physical function, activities of daily living, health conditions, self-care deficits, and general health. For this study we used data from 3 items. The self-reported general health measure is scored in a range of 1 (excellent) to 4 (poor) by answering the question “compared to other persons your age, would you say your health is…?” The second item asked “because of a disability or health problem, do you need or receive help from another person for any of the following everyday activities…?” The activities listed include preparing meals, shopping for groceries, doing routine household chores, managing money, doing laundry, taking medications, getting to places not within walking distance, and using the telephone. This item was scored 0 if no activities were checked and 1 if 1 or more activities were checked. The third item was “Is there any physical condition, illness, or health problem that bothers you now?” and was scored 0 for “no” and 1 for “yes.”
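The scoring rules for these 3 items can be expressed compactly in code. This is an illustrative sketch only; the function and field names are hypothetical and not part of the HSQ instrument:

```python
# Hypothetical scoring helper; names are illustrative, not from the HSQ.
def score_hsq(general_health: int, help_items_checked: int, bothered: bool) -> dict:
    """Score the 3 HSQ items used in the study.

    general_health: 1 (excellent) to 4 (poor), used as-is
    help_items_checked: number of everyday activities needing help (scored 0/1)
    bothered: whether a physical condition bothers the respondent now (scored 0/1)
    """
    if not 1 <= general_health <= 4:
        raise ValueError("general health must be rated 1-4")
    return {
        "gsrh": general_health,
        "adl_help": 1 if help_items_checked >= 1 else 0,
        "bothersome": 1 if bothered else 0,
    }

print(score_hsq(general_health=3, help_items_checked=0, bothered=True))
# → {'gsrh': 3, 'adl_help': 0, 'bothersome': 1}
```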
The DxCG relative risk score16 is based on an individual’s total predicted cost in the next year relative to the population mean and is used to assign individuals to a Diagnostic Cost Group (DCG). DxCG Risk Solutions software16 is used to calculate the DxCG relative risk score. DCG cost estimates are calculated using 1 year of inpatient and outpatient International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to classify patients into groups with similar cost patterns. The diagnoses are classified into mutually exclusive DCGs, based on predicting future inpatient and outpatient costs. The software algorithm for DCG classifications was developed using 3 different populations: a privately insured population, a Medicaid population, and a Medicare population. Diagnostic Cost Groups for this study were based on the Medicare-derived DCG grouping. Medicare DxCG scores ranged from 0.04 (minimum) to 15.59 (maximum). Individuals with a score of 1.0 have a relative risk equivalent to that of the national Medicare reference population. This reference population includes any of the following (excluding those with end-stage renal disease): (1) individuals 65 years or older, (2) disabled (under age 65 years) Medicare-eligible individuals, or (3) Medicare/Medicaid dual eligibles. Previous research has used the DxCG relative risk score to predict future costs and mortality estimates.12,17-19
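Based on the description above, the relative risk score can be read as a simple cost ratio, with a score of 1.0 corresponding to the reference population mean. A minimal sketch of this normalization (an assumed form; the actual DxCG software and its underlying cost model are proprietary):

```python
# Sketch of the relative-risk normalization described in the text (assumed form,
# not the proprietary DxCG implementation).
def relative_risk_score(predicted_cost: float, reference_mean_cost: float) -> float:
    """Predicted next-year cost relative to the reference population mean."""
    if reference_mean_cost <= 0:
        raise ValueError("reference mean cost must be positive")
    return predicted_cost / reference_mean_cost

print(relative_risk_score(12000.0, 12000.0))  # → 1.0 (matches the reference mean)
print(relative_risk_score(6000.0, 12000.0))   # → 0.5 (half the reference mean)
```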
In the year following the date of the HSQ, we used electronic medical records to determine if the subject had any inpatient hospital admissions. Actual health plan costs for each person for this same period were extracted from administrative data and presented in 2008 dollars.
Prior to logistic regression analysis, we tested for collinearity among the self-reported health items and DxCG risk score by calculating a Pearson product-moment correlation coefficient. We used 2 sets of logistic regressions, with 2 models in each set: DxCG as the sole independent variable and DxCG plus the 3 self-reported items as the independent variables. We calculated sensitivity, specificity, receiver operating characteristic (ROC) curves, and the area under the curve (AUC) for each model. Any inpatient admission (yes/no) was the dependent variable for the first set of regression models and being in the top 10% of costs (based on the observed study population) was the dependent variable for the second set of regression models. Prior to conducting the main analyses, those with and without complete data were compared using t tests and χ2 tests.
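The analytic strategy above (fit nested logistic regressions, then compare discrimination via the AUC) can be sketched as follows. The data here are synthetic stand-ins with illustrative effect sizes, not the study data, and the hand-rolled fitting and AUC helpers are a minimal substitute for whatever statistical package the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5815  # complete-case sample size reported in the study

# Synthetic stand-ins for the predictors (illustrative, not the study data)
dxcg = rng.lognormal(-0.6, 0.7, n)               # DxCG relative risk score
gsrh = rng.integers(1, 5, n).astype(float)       # 1 = excellent ... 4 = poor
adl = rng.binomial(1, 0.11, n).astype(float)     # needs help with >= 1 activity
bother = rng.binomial(1, 0.40, n).astype(float)  # bothersome condition

# Simulate an admission outcome driven by all four predictors
eta = -4.2 + 1.0 * dxcg + 0.5 * gsrh + 0.8 * adl + 0.6 * bother
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def fit_logistic(X, y, iters=25):
    """Fit logistic regression by Newton's method (IRLS); returns coefficients."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)
        beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

def predict(X, beta):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ beta))

def auc(y, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

X1 = dxcg[:, None]                               # Model 1: DxCG alone
X2 = np.column_stack([dxcg, gsrh, adl, bother])  # Model 2: DxCG + 3 self-report items

auc1 = auc(y, predict(X1, fit_logistic(X1, y)))
auc2 = auc(y, predict(X2, fit_logistic(X2, y)))
print(f"AUC, DxCG only:      {auc1:.3f}")
print(f"AUC, DxCG + 3 items: {auc2:.3f}")
```

Because the second model nests the first and the simulated outcome depends on all four predictors, the expanded model's AUC should exceed that of the DxCG-only model, mirroring the pattern the study reports.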
A total of 7545 HSQs were returned between December 2006 and October 2008 from 7496 unique individuals. Of those, 6407 people met the inclusion criteria of 1 year of eligibility prior to and after the HSQ date. Of those, 5815 (90.8%) had complete data on the 3 self-report items of interest and the DxCG. Those with complete data were younger (mean = 68.55 vs 70.71, P <.001), more likely to be male (44.9% vs 39.6%, P = .016), more likely to have attended college (67.0% vs 59.7%, P <.001), more likely to be married (69.0% vs 59.5%, P <.001), and to be white (93.1% vs 88.0%, P <.001). There were no significant differences between those with and without complete data on the likelihood of being in the top 10% of costs (P = .680) or of having an inpatient admission (P = .618).
Description of the Sample
The average age of the sample was 68.55 (standard deviation [SD] = 6.34), with a range of 65 to 97. The majority of the sample was female (55.3%), married (69.0%), white (93.1%), and had attended some college (67.0%). Only 1.0% of the sample reported their race as black or African American; 1.9% as Asian or Pacific Islander; 0.3% as Aleut, Eskimo, or American Indian; and 1.3% as other. When asked to rate their health compared with other persons of their age, 27.8% rated it as excellent, 55.4% as good, 14.9% as fair, and 2.0% as poor. Only 10.9% needed help with 1 or more instrumental daily activities, and 40.5% had a physical condition, illness, or health problem that bothered them at the time of taking the survey. The mean DxCG relative risk score was 0.68 (SD = 0.75), with a range of 0.19 to 9.91. Slightly more than 10% (10.8%) had 1 or more inpatient stays during the year following the survey.
Comparison of DxCG-Only Model and DxCG-Plus-3-Self-Report-Items Model
We used logistic regression to test whether addition of the 3 self-report items significantly improved the DxCG model’s accuracy in predicting inpatient admissions and being in the top 10% of costs. The correlations among the self-report items and DxCG ranged from 0.26 to 0.45 (results not shown), which did not indicate a high degree of multicollinearity. For predicting inpatient admissions, the 3 self-report items made a significant contribution to the model compared with using DxCG alone (χ2 = 85.11, degrees of freedom [df] = 3, P <.001), accounting for an additional 2.8% of the variance in having 1 or more inpatient admissions. A table summarizes results of the model. Higher DxCG scores, poorer self-reported health, needing help with at least 1 everyday activity, and having a physical condition that was bothersome at the time of completing the survey were all associated with having an inpatient admission. For each additional point on the DxCG, the likelihood of an inpatient admission increased by 63%. A change of 1 point on self-reported health (eg, from good to fair) increased the likelihood of an inpatient admission by 30%. Needing help with at least 1 instrumental daily activity and having a physical condition that bothered the respondent at the time of the survey increased the likelihood of an inpatient admission by 52% and 53%, respectively. A figure presents the ROC curves for the model that used DxCG alone and for the model that used DxCG plus the 3 self-report variables. The AUC was 0.686 (P <.001, 95% confidence interval [CI] 0.663-0.710) for the model with DxCG alone, and improved to 0.709 (P <.001, 95% CI 0.687-0.730) when the 3 self-report items were added. A table provides results for sensitivity, specificity, positive predictive value, and negative predictive value for both models. Sensitivity was 0.467 for the DxCG-only model and improved to 0.529 when the 3 self-report items were added. Specificity was 0.807 for the DxCG-only model and 0.741 when the 3 self-report items were added to the model.
Similarly, analysis of the models’ ability to predict being in the top 10% of costs showed that the 3 self-report items made a significant contribution to the model compared with the model that used DxCG alone (χ2 = 120.64, df = 3, P <.001). The 3 self-report items accounted for an additional 4.0% of the variance explained in being in the top 10% cost group (Table 3). Table 2 summarizes the results of the full logistic model with DxCG and the 3 self-report items. Higher DxCG scores, poorer self-reported health, needing help with at least 1 everyday activity, and having a physical condition that bothered the participant at the time of completing the survey were all associated with being in the top 10% of costs. For each additional point on the DxCG, the likelihood of being in the top 10% of costs increased by 104%. A change of 1 point on the self-reported health score (eg, good to fair) increased the likelihood of being in the top 10% of costs by 30%. Needing help with at least 1 instrumental daily activity and having a bothersome physical condition increased the likelihood of being in the top 10% of costs by 78% and 82%, respectively. A figure presents the ROC curves for the model using DxCG alone and for the model with DxCG plus the 3 self-report variables. The AUC was 0.741 (P <.001, 95% CI 0.719-0.764) for the model that used DxCG alone and improved to 0.770 (P <.001, 95% CI 0.749-0.790) when the 3 self-report items were added to the model. Sensitivity was 0.522 for the DxCG-only model and improved to 0.611 when the 3 self-report items were added to the model. Specificity was 0.821 for the DxCG-only model and 0.771 when the 3 self-report items were added to the model (see Table 3).
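The percentage increases reported for both models follow from exponentiating logistic regression coefficients into odds ratios, with exp(b) - 1 giving the proportional increase in odds. A minimal sketch, using illustrative coefficients chosen only so that their exponentials reproduce the odds ratios reported for the inpatient-admission model (these are not the study's fitted values):

```python
import math

# Illustrative coefficients whose exponentials match the reported odds ratios
# for the inpatient-admission model; not the study's actual fitted values.
betas = {
    "DxCG score (per point)": 0.489,
    "Self-rated health (per point)": 0.262,
    "Help with >= 1 activity": 0.419,
    "Bothersome condition": 0.425,
}

for name, beta in betas.items():
    odds_ratio = math.exp(beta)
    pct_increase = (odds_ratio - 1.0) * 100.0
    print(f"{name}: OR = {odds_ratio:.2f}, odds increase = {pct_increase:.0f}%")
```

For example, exp(0.489) ≈ 1.63, which corresponds to the reported 63% increase in the likelihood of admission per additional DxCG point.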
This study found that self-reported items add predictive power for future healthcare costs and inpatient admissions beyond the DxCG relative risk score. Specifically, poorer self-reported health, needing help with at least 1 activity of daily living, and having a bothersome physical condition added predictive power for higher costs and inpatient admissions in the 12 months after survey administration. This study is one of only a few to examine whether self-reported health items add predictive power for subsequent costs and inpatient admissions over and above the DxCG relative risk measure in a Medicare population.
Results from this study confirm findings from other studies. Several studies have compared self-reported health items with administrative data-based risk measures to predict future costs, but did not examine the additive effect of self-reported items over administrative data-based risk measures. Using data from the Medical Expenditure Panel Survey, DeSalvo et al found that a 1-item GSRH score was predictive of future healthcare costs, but less robust than models that use administrative data-based risk measures.8 Similarly, using administrative and survey data from a VA population, Maciejewski et al found that the SF-36 Physical Component Summary and Mental Component Summary were predictive of future healthcare costs, but again less robust than administrative data-based risk measures.19 In addition, similar to our findings, Fleishman et al found that measures of health status improved prediction of subsequent medical expenditures beyond demographics, chronic conditions, and previous expenditures.11
We also found that the DxCG risk score and observed self-report health items were not highly correlated with each other. Similar to our findings, Wang et al found that DCG (similar to DxCG) and SF-36 physical and mental health scores were both predictive of future healthcare costs, but not highly correlated.20 The implication of these results is that DxCG and the individual self-report health items are measuring different underlying constructs and do not overlap greatly.
Findings from our study have implications for care delivery and quality improvement initiatives. Asking questions about self-rated health status, activities of daily living, and bothersome physical conditions is easy to do and can be administered in a variety of care delivery settings. The strong predictive ability of these self-report questions makes them worthy of integration into enhanced care management and patient outreach efforts. This information is especially important when diagnostic history is not available or is too expensive to collect.
This study is not without limitations. First, the DxCG relative risk measure was the only administrative data-based risk measure analyzed in this study. This limitation is addressed in part by previous research finding that the DxCG relative risk measure has greater predictive ability than other administrative data-based measures.19 Second, the study population was enrolled in an integrated health maintenance organization delivery system, so study findings cannot be generalized to non-managed care settings. Third, the study population included few racial/ethnic minorities, again limiting the generalizability of the study findings. Fourth, we were not able to compare the characteristics of those who did not return an HSQ with those who did. Fifth, patient-level cost data include only services for which both utilization and matching general ledger information were available; costs for services without adequate utilization information are excluded, so costs may be understated for some patients. Last, the DxCG predictive model did not include laboratory and biometric data or information on receipt of evidence-based care based on clinical condition, which are important factors that can improve predictive modeling.
Future research is needed to evaluate the use of self-reported health items to enhance care management activities. Specifically, randomized studies are needed to compare enhanced delivery models using self-reported health items to usual care. Within this approach, more formal cost-effectiveness analysis is needed to identify potential reductions in hospital admissions through rigorous case finding and enhanced care management approaches. The primary objective would be to determine the extent to which the addition of self-reported health information enhances the ability of healthcare organizations to better manage overall costs and utilization over time.
Author Affiliations: From the Center for Health Research (NAP, DMM, AB, ES), Kaiser Permanente Northwest, Portland, OR; Department of Care and Service Quality (MS), Kaiser Permanente, Oakland, CA; and Decision Support Services (EMD), Kaiser Permanente, Oakland, CA.
Funding Source: Kaiser Permanente.
Author Disclosures: The authors (NAP, MS, DMM, AB, ES, EMD) report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (NAP, MS, DMM, AB); acquisition of data (DMM, AB, ES, EMD); analysis and interpretation of data (NAP, MS, DMM, ES, EMD); drafting of the manuscript (NAP, MS, DMM); critical revision of the manuscript for important intellectual content (MS, DMM, AB, EMD); statistical analysis (NAP); obtaining funding (MS, AB); and administrative, technical, or logistic support (DMM, AB, ES).
Address correspondence to: Nancy A. Perrin, PhD, 3800 N Interstate Ave, Portland, OR 97227-1110. E-mail: Nancy.Perrin@kpchr.org.
1. Meenan RT, Goodman MJ, Fishman PA, Hornbrook MC, O’Keeffe-Rosetti MC, Bachman DJ. Using risk-adjustment models to identify high-cost risks. Med Care. 2003;41(11):1301-1312.
2. Winkelman R, Mehmud S. A comparative analysis of claims-based tools for health risk assessment. Society of Actuaries. http://www.soa.org/files/pdf/risk assessmentc.pdf. Published 2007.
3. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232-239.
4. DeSalvo KB, Bloser N, Reynolds K, He J, Muntner P. Mortality prediction with a single general self-rated health question: a meta-analysis. J Gen Intern Med. 2005;20:267-275.
5. Diehr P, Williamson J, Patrick DL, Bild DE, Burke GL. Patterns of self-rated health in older adults before and after sentinel health events. J Am Geriatr Soc. 2001;49(1):36-44.
6. DeSalvo KB, Fisher WP, Tran K, Bloser N, Merrill W, Peabody J. Assessing measurement properties of two single-item general health measures. Qual Life Res. 2006;15(2):191-201.
7. Jylha M. What is self-rated health and why does it predict mortality? Towards a unified conceptual model. Soc Sci Med. 2009;69(3):307-316.
8. DeSalvo KB, Jones TM, Peabody J, et al. Health care expenditure prediction with a single item, self-rated health measure. Med Care. 2009;47(4):440-447.
9. Brody KK, Johnson RE, Ried LD, Carder PC, Perrin N. A comparison of two methods for identifying frail Medicare-aged persons. J Am Geriatr Soc. 2002;50(3):562-569.
10. Brody KK, Perrin NA, Dellapenna R. Advanced illness index: predictive modeling to stratify elders using self-report data. J Palliat Med. 2006;9(6):1310-1319.
11. Fleishman JA, Cohen JW, Manning WG, Kosinski M. Using the SF-12 health status measure to improve predictions of medical expenditures. Med Care. 2006;44(suppl):I54-I63.
12. Fleishman JA, Cohen JW. Using information on clinical conditions to predict high-cost patients. Health Serv Res. 2010;45(2):532-552.
13. Pietz K, Petersen LA. Comparing self-reported health status and diagnosis-based risk adjustment to predict 1- and 2 to 5-year mortality. Health Serv Res. 2007;42(2):629-643.
14. Pietz K, Ashton CM, McDonell M, Wray NP. Predicting healthcare costs in a population of Veterans Affairs beneficiaries using diagnosis-based risk adjustment and self-reported health status. Med Care. 2004;42(10):1027-1035.
15. Katz S, Branch LG, Branson MH, Papsidero JA, Beck JC, Greer DS. Active life expectancy. N Engl J Med. 1983;309(20):1218-1224.
16. DxCG Inc. Guide to the Diagnostic Cost Groups (DCGs) and DCG Software, Release 4. Waltham, MA: DxCG Inc; 1999.
17. Ash AS, Ellis RP, Pope GC, et al. Using diagnoses to describe populations and predict costs. Health Care Financ Rev. 2000;21(3):7-28.
18. Ash AS, Posner MA, Speckman J, Franco S, Yacht AC, Bramwell L. Using claims data to examine mortality trends following hospitalization for heart attack in Medicare. Health Serv Res. 2003;38(5):1253-1262.
19. Maciejewski ML, Liu C, Derleth A, McDonell M, Anderson S, Fihn SD. The performance of administrative and self-reported measures for risk adjustment of Veterans Affairs expenditures. Health Serv Res. 2005;40(3):887-904.
20. Wang ML, Rosen AK, Kazis L, Loveland S, Anderson J, Berlowitz D. Correlation of risk adjustment measures based on diagnoses and patient self-reported health status. Health Serv Outcomes Res Methodology. 2000;1(3-4):351-365.