
Using Electronic Health Records and Claims Data to Identify High-risk Patients Likely to Benefit From Palliative Care

The American Journal of Managed Care, January 2021, Volume 27, Issue 1

Deep learning algorithms could improve palliative care by predicting mortality from electronic health records and claims data.

ABSTRACT

Objectives: Palliative care has been demonstrated to have positive effects for patients, families, health care providers, and health systems. Early identification of patients who are likely to benefit from palliative care would increase opportunities to provide these services to those most in need. This study predicted all-cause mortality as a surrogate for identifying patients who could benefit from palliative care.

Study Design: Claims and electronic health record (EHR) data for 59,639 patients from a large integrated health care system were utilized.

Methods: A deep learning algorithm—a long short-term memory (LSTM) model—was compared with other machine learning models: deep neural networks, random forest, and logistic regression. We conducted prediction analyses using combined claims and EHR data, claims data only, and EHR data only. In each case, the data were randomly split into training (80%), validation (10%), and testing (10%) data sets. The models with different hyperparameters were trained using the training data, and the model with the best performance on the validation data was selected as the final model. The testing data were used to provide an unbiased performance evaluation of the final model.

Results: In all modeling scenarios, LSTM models outperformed the other 3 models, and using combined claims and EHR data yielded the best performance.

Conclusions: LSTM models can effectively predict mortality by using a combination of EHR data and administrative claims data. The model could be used as a promising clinical tool to aid clinicians in early identification of appropriate patients for palliative care consultations.

Am J Manag Care. 2021;27(1):e7-e15. https://doi.org/10.37765/ajmc.2021.88578

_____

Takeaway Points

We conclude that the long short-term memory model can be used as a promising clinical tool to aid clinicians in early identification of patients who are appropriate for palliative care consultations.

  • It can update predicted probability of mortality daily for each patient.
  • It does not require expert knowledge or hand selection to design features.
  • The predicted mortality probabilities may assist physicians in identifying patients for palliative care consultations.

_____

Although most seriously ill patients wish to avoid aggressive and burdensome care, many receive invasive procedures and therapies in the final months of their lives with limited benefit.1-3 Such aggressive care near the end of life is associated with reduced quality of life for patients, excessive end-of-life medical spending, and a high proportion of patients dying in the hospital or other medical facility rather than at home.4,5

Palliative care is a holistic approach to care administered at any point in a serious illness that offers patients support with intensive symptom management and complex medical decision-making, whereas hospice care utilizes a palliative interdisciplinary approach for patients with a prognosis of having less than 6 months to live; these patients typically desire less aggressive care.6 Palliative care is associated with improved patient quality of life,7,8 reduced hospital readmissions,9 decreased hospital length of stay,10 and reduced total cost of care.11,12 Despite multiple studies demonstrating that the majority of patients in the last year of life experience a high and escalating symptom burden and considerable need for support with advanced care planning,13-16 providers and health care systems have struggled to proactively identify and support these high-risk patients and their families.17-19

Improving the use of palliative care has been identified as a key driver of success for value-based reimbursement models such as accountable care organizations (ACOs), Medicare Advantage plans, and capitated insurance plans.20-24 One study of managed care patients provided with home palliative care support found a mean savings of $3908 per member per month in the last 3 months of life compared with propensity-matched controls, with hospice participation increasing from 25% to 70% and median hospice length of stay increasing from 9 to 34 days.25 Currently, the majority of patients who would benefit from palliative care services are either referred late in their disease course or not at all,26,27 so there has been intense interest in developing and implementing prognostic tools to proactively identify patients near the end of life who may benefit from palliative care services.28,29

Prognostic models are commonly used in clinical practice to identify high-risk populations who may or may not benefit from specific interventions.30-37 However, a recent study38 highlighted the limitations of such models, including the use of small data sets for derivation, limitations in number and complexity of variables considered, oversimplified models, and constraints on the population included in the models. To overcome these limitations, the authors applied a deep learning model on electronic health records (EHRs) for a large number of patients.38 However, a remaining limitation of the study was that it employed only EHR data, which may be incomplete for patients who receive care in multiple medical systems, as most patients do.

In this paper, we integrated administrative claims and EHR data into our models. Claims data provide a richer picture for each patient, although the information can lag by 3 to 6 months. We hypothesized that the integration of claims and EHR data would provide more complete and accurate data than either source alone. We searched the PubMed and Google Scholar databases for literature prior to August 2020. To the best of our knowledge, no studies have compared EHR and claims data for predicting mortality for patients in the United States.

We implemented a deep learning method to predict mortality of inpatients and outpatients as a proxy for identifying patients to be considered for palliative care. Our long-term goal is to provide a clinical referral system for daily use to assist physicians in identifying patients who could benefit from palliative care.

METHODS

Study Purpose

We hypothesized that combining claims data with EHR data could achieve better performance for mortality prediction than using EHR or claims data alone. To test this hypothesis, we applied and compared mortality prediction performance across 4 types of machine learning and deep learning models—long short-term memory (LSTM), deep neural networks (DNN), random forest (RF), and logistic regression (LR)—using EHR data, claims data, and a combination of EHR and claims data.

Data Source

In this study, we used EHR data and administrative claims data from the Medicare and Medicare Advantage ACOs of a large integrated health care system. The system operates in 2 states and uses EpicCare EHR software. We included all 59,639 patients with encounter records from January 2017 to February 2020.

Study Design

A patient was considered a positive case if the patient had a recorded date of death. All other patients were considered negative cases. We first included all 59,639 unique patients who had at least 1 EHR or claims record (“claims + EHR”). Of these 59,639 patients, 56,598 had at least 1 claims record (“claims only”) and 55,316 had at least 1 EHR record (“EHR only”).

For the above 3 cases, we followed a similar process of feature extraction, data splits, model training, and evaluation as described below.

Feature Extraction

For each patient, we extracted all available encounter records. The features included diagnosis codes, procedure codes, medication codes, problem lists, demographics, and social history. For the social history, we combined the questions and answers into a column of “codes.” For example, if a patient had a question of “drug use” and the corresponding answer was “no,” then we combined the question and answer as “DrugUseNo.” There were 84,315 unique codes included in the analyses.
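
As a minimal sketch (the helper function name is hypothetical and not from the paper), a social-history question-answer pair could be collapsed into a single code like this:

```python
def social_history_code(question, answer):
    """Collapse a social-history question-answer pair into one code.

    Hypothetical helper illustrating the paper's example: the question
    "drug use" with the answer "no" becomes the single code "DrugUseNo".
    """
    parts = question.split() + answer.split()
    return "".join(word.capitalize() for word in parts)

assert social_history_code("drug use", "no") == "DrugUseNo"
```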

To better capture the pattern and nature of our longitudinal data, all codes were sorted in time-increasing order and then mapped to a 32-dimensional vector space using Word2Vec.39 The Python Gensim Word2Vec model was used with the following hyperparameters: size (embedding dimension) was 32, window (the maximum distance between a target word and all words around it) was 5, min_count (the minimum number of words counted when training the model) was 1, and sg (the training algorithm) was CBOW, or the continuous bag of words. Each feature was then represented by a 32-dimensional numerical vector and also associated with a time point (age at event), calculated as the difference in years between the corresponding visit date and the patient’s birth date. We also included race as a feature in our study. Ultimately, each individual patient had their own numerical vector to represent their codes.
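
A sketch of this embedding step with the stated hyperparameters, using the Gensim Word2Vec API (the example code sequences are hypothetical; in Gensim 3.x the dimension argument is size, renamed vector_size in Gensim 4.x):

```python
from gensim.models import Word2Vec

# Each "sentence" is one patient's codes sorted in time-increasing order.
patient_sequences = [
    ["I10", "E11.9", "99213", "DrugUseNo"],  # hypothetical patient 1
    ["J44.9", "99214", "I10"],               # hypothetical patient 2
]

# Hyperparameters reported in the paper: 32-dimensional embeddings,
# window of 5, min_count of 1, CBOW training algorithm (sg=0).
w2v = Word2Vec(
    patient_sequences,
    size=32,      # "vector_size" in Gensim >= 4.0
    window=5,
    min_count=1,
    sg=0,         # 0 = continuous bag of words (CBOW)
)

vector = w2v.wv["I10"]  # 32-dimensional numeric representation of one code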

The number of events varied across patients. We selected 500 events for all patients; this empirical choice was based on proximity to the mean number of events (mean = 466 for the claims + EHR case) across all patients. These 500 events were sorted in time order. If a patient had more than 500 events, only the most recent 500 were included; if fewer than 500 were available, the sequence was zero-padded to 500 events.

The input of LSTM had a dimension of (n, 500, 34), where n was the number of samples/patients, 500 was the number of codes included, and 34 was the dimensionality of features (32-dimensional vector by Word2Vec, plus age and race). The input of DNN, RF, and LR had a dimension of (n, 500 × 34).
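
A minimal sketch of assembling these inputs, assuming each patient is represented as a time-ordered list of 34-dimensional feature vectors; the paper does not specify where padding is placed, so front-padding (zeros before the earliest events) is one reasonable choice:

```python
import numpy as np

MAX_EVENTS, FEAT_DIM = 500, 34  # 500 events x (32-dim embedding + age + race)

rng = np.random.default_rng(0)
# Two hypothetical patients: one with 3 events, one with 600 events.
all_patients = [
    [rng.random(FEAT_DIM, dtype=np.float32) for _ in range(3)],
    [rng.random(FEAT_DIM, dtype=np.float32) for _ in range(600)],
]

def to_fixed_length(events):
    """Keep a patient's most recent 500 events; zero-pad shorter histories."""
    seq = np.zeros((MAX_EVENTS, FEAT_DIM), dtype=np.float32)
    recent = events[-MAX_EVENTS:]           # most recent 500 events
    seq[-len(recent):] = np.stack(recent)   # right-align; zeros in front
    return seq

X_lstm = np.stack([to_fixed_length(p) for p in all_patients])  # (n, 500, 34)
X_flat = X_lstm.reshape(len(X_lstm), MAX_EVENTS * FEAT_DIM)    # (n, 500 * 34)
```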

Data Splits for Training, Validation, and Testing

We included 59,639 patients for claims + EHR, 56,598 patients for claims only, and 55,316 patients for EHR only analyses.

The selected patients were randomly split into training (80%), validation (10%), and testing (10%) data sets. The number of patients in each data set is shown in eAppendix Table 1 (eAppendix available at ajmc.com).

The models were trained on the training data set and then validated on the validation data set. The model with the best performance on the validation data set was selected as the final model to be tested in the testing data set.
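
A sketch of this 80/10/10 split with scikit-learn, using dummy data; the random seed and the use of stratification are assumptions, as the paper states only that the split was random:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the real feature tensor and mortality labels.
X = np.random.rand(1000, 500, 34).astype(np.float32)
y = np.random.randint(0, 2, size=1000)

# 80/10/10: first split off 20%, then halve it into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42, stratify=y_tmp)
```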

The Training of Machine Learning and Deep Learning Models

The embedded vectors were the inputs of the machine learning and deep learning models. Our model was an LSTM neural network composed of an input layer, 1 hidden layer (128 dimensions), and a scalar output layer. Binary cross-entropy was employed as the loss function, and a Sigmoid function was used as the activation function for the hidden layer. The Adam optimizer40 was used to optimize the model with a minibatch size of 256 samples. We conducted an extensive hyperparameter search over activation functions (Sigmoid, tanh, SeLU, and ReLU) and embedding dimensions (32, 64). We did not extensively search other hyperparameters, such as the number of LSTM layers, number of recurrent units, and batch size, as these hyperparameters were of minor importance.41
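
A minimal Keras sketch of the stated architecture (a single 128-unit LSTM hidden layer with Sigmoid activation, a scalar Sigmoid output, binary cross-entropy loss, the Adam optimizer, and a minibatch size of 256); the epoch count is an assumption, as it is not reported:

```python
from tensorflow import keras

# Assumes X_train, y_train, X_val, y_val from the split sketch above.
model = keras.Sequential([
    keras.Input(shape=(500, 34)),                  # 500 events x 34 features
    keras.layers.LSTM(128, activation="sigmoid"),  # single 128-unit hidden layer
    keras.layers.Dense(1, activation="sigmoid"),   # scalar mortality probability
])
model.compile(
    optimizer=keras.optimizers.Adam(),
    loss="binary_crossentropy",
    metrics=[keras.metrics.AUC(name="auc")],
)
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    batch_size=256,
    epochs=10,  # epoch count not reported in the paper; assumed here
)
```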

We also compared the LSTM with other models: DNN,42 RF,43 and LR.44 The DNN model was composed of an input layer, 4 hidden layers (with 256, 128, 64, and 32 dimensions, respectively), and a scalar output layer. We used the Sigmoid function45 for the output layer and the ReLU function46 at each hidden layer. Binary cross-entropy was used as the loss function, and the Adam optimizer was used to optimize the models with a minibatch size of 256 samples. The RF and LR models were configured with the default options of the Scikit-learn package in Python 3.
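
The comparison models might be set up as follows; a sketch assuming the flattened (n, 500 × 34) input described earlier, with RF and LR at scikit-learn defaults as stated:

```python
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# DNN on the flattened (n, 500 * 34) input: 4 ReLU hidden layers, sigmoid output.
dnn = keras.Sequential([
    keras.Input(shape=(500 * 34,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
dnn.compile(optimizer="adam", loss="binary_crossentropy")

# RF and LR configured with scikit-learn defaults, per the paper.
rf = RandomForestClassifier()
lr = LogisticRegression()
```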

Model Evaluation

We implemented a variety of metrics to evaluate model performance in order to identify the best models. The metrics included area under the receiver operating characteristic (ROC) curve (AUC), precision recall curves, overall accuracy, sensitivity, specificity, precision, and F1-score.

We also tested the model under different discrimination thresholds or cutoffs. The values of cutoffs used were 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 0.99.
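
A sketch of computing these metrics at each cutoff, assuming a trained model and the testing data from the sketches above; specificity is derived directly, since scikit-learn has no dedicated specificity function:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Assumes `model`, `X_test`, and `y_test` from the sketches above.
y_prob = model.predict(X_test).ravel()  # predicted mortality probabilities
y_test = np.asarray(y_test)

print("AUC:", roc_auc_score(y_test, y_prob))
print("AP: ", average_precision_score(y_test, y_prob))

for cutoff in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99):
    y_hat = (y_prob >= cutoff).astype(int)
    specificity = np.mean(y_hat[y_test == 0] == 0)  # true-negative rate
    print(f"cutoff={cutoff:.2f}  "
          f"acc={accuracy_score(y_test, y_hat):.3f}  "
          f"sens={recall_score(y_test, y_hat, zero_division=0):.3f}  "
          f"spec={specificity:.3f}  "
          f"prec={precision_score(y_test, y_hat, zero_division=0):.3f}  "
          f"F1={f1_score(y_test, y_hat, zero_division=0):.3f}")
```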

Separate Model Testing for Inpatients and Outpatients

In the study design, we included all inpatients and outpatients. Inpatients were defined as patients who had at least 1 hospital admission. Patients without hospital admissions were defined as outpatients. We investigated inpatients and outpatients separately in the testing data set by our trained LSTM models for claims + EHR analyses.

In the testing data set, 3143 patients had an admission and 2821 patients had only outpatient visits recorded. Among these, 509 inpatients and 107 outpatients were deceased and 2634 inpatients and 2714 outpatients were alive at the end of follow-up.

Figure 1 shows the flowchart of the overall study. Analyses were conducted in 2020 using the Scikit-learn, Keras, SciPy, and Matplotlib libraries with Python version 3.6.5.

RESULTS

The study yielded 4 main results, which we summarize here. First, we showed the characteristics of the overall study population (Table) and observed that the majority of patients were female and White, with a mean age of 74 years. Second, we evaluated model performance by ROC curves, precision recall curves, and other performance metrics (Figure 2 and eAppendix Table 2). In all modeling scenarios, LSTM models outperformed the other 3 models, and using claims + EHR data yielded the best performance. Third, we showed the distribution of predicted probabilities for the claims + EHR case by the LSTM model to intuitively convey prediction performance and potential thresholds for a desired accuracy (Figure 3 and eAppendix Figure). The distribution of predicted probabilities of the LSTM on patients in the testing data set showed a high true-negative rate (specificity) and a low false-positive rate. Finally, we showed LSTM model performance on inpatients and outpatients to understand the difference in performance across populations (Figure 4 and eAppendix Table 3). The trained LSTM models performed better on the inpatient sample than on the outpatient sample (AUC, 0.97 vs 0.92; average precision [AP], 0.92 vs 0.72).

Characteristics of the Overall Study Population

The Table details the characteristics of our overall study population. Approximately 10% of patients died over the course of follow-up. The mean age of patients who died was 78 years, and the mean age was 73 years for those who were alive at the end of follow-up. The majority of the patients were female and White. Approximately 76% of patients had a diagnosis of essential (primary) hypertension, and deceased patients had a higher proportion of this diagnosis (84%) compared with patients who were alive (75%).

Model Performance by AUC and Precision Recall Curves

The LSTM model outperformed the other 3 models in terms of AUC values and precision recall curves. All the models achieved the best performance by using a combination of EHR and claims data compared with using each data type alone.

Figure 2 shows the model performance by AUC and precision recall curves. The AUC for the LSTM model was 0.97 (0.95 for DNN, 0.92 for RF, and 0.88 for LR) for claims + EHR, 0.96 (0.93, 0.90, and 0.88, respectively) for claims only, and 0.94 (0.90, 0.84, and 0.77) for EHR only.

Meanwhile, the LSTM model achieved an AP score of 0.89 (0.82 for DNN, 0.71 for RF, and 0.60 for LR) for claims + EHR, 0.87 (0.76, 0.66, and 0.60, respectively) for claims only, and 0.77 (0.64, 0.47, and 0.37) for EHR only.

Both the AUC values and precision recall curves indicated that LSTM outperformed the other 3 models. The curves also demonstrated that LSTM models had a strong early recall behavior.

Model Performance by Other Metrics

LSTM models outperformed the other 3 models using additional performance metrics, and using claims + EHR data yielded the best performance.

eAppendix Table 2 lists other metrics we used to evaluate model performance. The metrics include overall accuracy, sensitivity, specificity, precision, and F1-score.

In all cases, LSTM demonstrated the best performance. For claims + EHR, it achieved an accuracy of 0.97 (vs 0.96 for claims only and 0.95 for EHR only), a sensitivity of 0.77 (vs 0.72 and 0.60, respectively), a specificity of 0.99 (vs 0.99 and 0.98), a precision of 0.91 (vs 0.91 and 0.79), and an F1-score of 0.83 (vs 0.81 and 0.68).

Distribution of Predicted Probabilities

The LSTM model had high sensitivity and specificity. The eAppendix Figure shows the distribution of predicted probabilities for claims + EHR by the LSTM model on the testing data set. A majority of deceased patients were predicted by the model to have high probabilities of mortality, and a small proportion of them had lower predicted probabilities of mortality, which means the model had a high true-positive rate (sensitivity) and a low false-negative rate. Of the patients who were alive at the end of follow-up, only a few were predicted to have a high probability of mortality and most of them had very low predicted probabilities of mortality, which means the model had a high true-negative rate (specificity) and a low false-positive rate.

Figure 3 shows the model performance of LSTM under different cutoffs of predicted probabilities. When using lower cutoffs (eg, 0.1), the model had a higher sensitivity (88%) with lower precision (58%) and F1-score (70%). When using higher cutoffs, the sensitivity of the model decreased and specificity increased. The F1-score first increased and then decreased as the cutoffs became higher.

Performance on Separate Inpatient and Outpatient Cohorts by the Trained LSTM Model

The LSTM models achieved better performance on inpatient compared with outpatient samples. We also tested the trained LSTM model on the inpatients and outpatients separately. The performance results are summarized in eAppendix Table 3 and Figure 4.

The trained LSTM models performed better on inpatients compared with outpatients. The AUC value was 0.97 for inpatients (0.92 for outpatients), and the AP score was 0.92 for inpatients (0.72 for outpatients). The other metrics followed a similar trend.

DISCUSSION

In this study, we utilized EHR data and administrative claims data from a large health system to predict patient mortality in order to better identify patients who might benefit from palliative care. We investigated 3 data sets: combined EHR and claims data, claims data only, and EHR data only. We compared LSTM models with DNN, RF, and LR models.

Our results indicated that deep learning LSTM models predicted patient mortality more effectively than the other models across all data sets. One possible reason for the better performance of the LSTM model is that it can learn from experience on time-series data in which the time steps are of arbitrary size.47 LSTM models can also handle vanishing gradient problems by retaining information (via a memory unit) for an arbitrary time period.48 The memory unit can determine which information needs to be retained, forgotten, and output to the next computational unit.49

In addition, our framework for using an LSTM model for mortality prediction has additional advantages, as it (1) is suitable for multiple data format sources, (2) does not require expert knowledge or hand selection to design features, (3) can dynamically update predicted mortality probabilities based on newly added features of patients, and (4) is a more general framework that can be used in both outpatient and inpatient systems. Understanding the relative performance of these models for identifying high-risk patients may help health care systems and managed care organizations select the optimal model for their organization.

Our results also indicated that models achieved the best performance by using both EHR and claims data, the next best by using claims data only, and the worst by using only EHR data. This was not surprising, as the combined EHR data and claims data provided more information for the models to discover and identify more potential patterns, which may contribute to the accurate predictions. One possible reason for claims data alone outperforming EHR data alone is that claims data usually contain more complete information for patients than EHR data; most patients receive care in multiple medical systems, which can result in incomplete EHR data. A study by Zeltzer et al of 118,510 Israeli patients found that over longer time durations, the relative benefit of claims data appeared to increase compared with EHR data, which were associated with greater prognostic accuracy for short-term outcomes.50

There is a consensus that effective use of palliative care is critical to the success of value-based care organizations by improving quality and reducing costs for patients near the end of life, where spending is concentrated. One of the key barriers to widespread adoption of palliative care to date has been correctly identifying which patients to target. Smith et al report that “[s]ystematic identification of patients for whom specialist and/or primary palliative care interventions are likely to improve quality of life and reduce cost is essential in setting up a successful population management program. Unfortunately, widely accepted criteria for identifying patients who would most benefit from palliative care have not been developed.”51 Current approaches to identifying patients for palliative care often focus on individual disease states (eg, congestive heart failure, chronic obstructive pulmonary disease) or historical utilization (eg, hospitalizations in the past 12 months), which do not identify all patients at risk and often identify patients too late in the disease course. Typically, these programs focus on either inpatients or outpatients but do not include both.

The advances in this article demonstrate a more accurate, holistic approach to identifying patients at high risk of mortality using all available sources of patient information (claims and EHR) for both inpatients and outpatients. If this approach is validated in additional populations, it can be used to test the impact of palliative care for patients in different phases of care on quality, spending, and utilization. This would allow value-based care organizations to more accurately measure the added value of palliative care for various populations of patients.

To better understand the differing performance for inpatients and outpatients, we investigated the number of records (source of knowledge) for each group. The inpatients had more information, with a mean number of 848 records, compared with the outpatients, with a mean number of 136 records. Thus, we hypothesize that the larger amount of information provided by inpatients to the LSTM model enabled better prediction of mortality compared with outpatients. Moreover, we tested the LSTM model for the case of claims + EHR in 4 patient groups with different numbers of records: fewer than 150 (n = 1680 patients), between 150 and 300 (n = 1309 patients), between 301 and 500 (n = 1104 patients), and more than 500 (n = 1871 patients). We observed that the number of records had an impact on model prediction results. Specifically, prediction results became less accurate for patients with fewer records. Future studies will also investigate the model performance on other patient groups, including different disease groups.

Limitations

Our study has several limitations. First, our models were trained on data from 1 site, and additional research is needed to determine their scalability to other health care settings. Second, we included only a subset of the population (Medicare/Medicare Advantage beneficiaries), who are older and have more comorbidities than the general population, so predictions for other age groups with fewer comorbidities may not be as accurate.51-53 In addition, machine/deep learning models may introduce unknown issues of model “fairness” with regard to protected attributes.54,55 Third, we predicted the overall status of death rather than the timing of death from a clinical event, although more than 96% of deaths occurred within 1 year of follow-up. Future studies will use similar methods to predict time to death, in-hospital mortality, and 30-day mortality at admission, which may help clinicians decide on the timing of palliative care interventions. Finally, these models used comprehensive data sets that were assembled after the last follow-up for these patients; using the models prospectively may be challenged by our ability to assemble, analyze, and interpret these data in real time. Future studies will focus on the implementation of this proof-of-concept study.

CONCLUSIONS

We demonstrate that deep learning LSTM models can effectively predict mortality by using a combination of EHR data and administrative claims data. The promising predictions of all-cause mortality could assist physicians in suggesting and providing palliative care consultations to appropriate patients.

Acknowledgments

The authors would like to thank Karen Shakiba, Jacob Heidbrink, Ly Mettlach, and Ryan Soluade at BJC Medical Group for their research assistance.

Author Affiliations: Institute for Informatics (AG, RF) and Department of Internal Medicine (RF, PW), Washington University School of Medicine, St Louis, MO; University of Pennsylvania Health System (CC), Philadelphia, PA; Division of Pulmonary, Allergy, and Critical Care, and Palliative and Advanced Illness Research Center, Perelman School of Medicine at the University of Pennsylvania (KC), Philadelphia, PA; BJC Medical Group and BJC Accountable Care Organization (NM), St Louis, MO.

Source of Funding: Work included in this document was produced by the research team. This work was produced with the support of the Big Ideas Program, a BJC HealthCare and Washington University internal grant program, hosted by the Healthcare Innovation Lab and the Institute for Informatics.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (AG, RF, PW, CC, KC, NM); acquisition of data (NM); analysis and interpretation of data (AG, RF, PW, CC, KC, NM); drafting of the manuscript (AG, PW, NM); critical revision of the manuscript for important intellectual content (RF, PW, CC, KC, NM); statistical analysis (AG); provision of patients or study materials (NM); obtaining funding (PW, NM); administrative, technical, or logistic support (CC, NM); and supervision (RF, PW, CC).

Address Correspondence to: Aixia Guo, PhD, Washington University School of Medicine, 600 S Taylor Ave, Ste 102, St Louis, MO 63110. Email: aixia.guo@wustl.edu.

REFERENCES

1. Institute of Medicine. Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. The National Academies Press; 2015.

2. Heyland DK, Dodek P, Rocker G, et al; Canadian Researchers End-of-Life Network (CARENET). What matters most in end-of-life care: perceptions of seriously ill patients and their family members. CMAJ. 2006;174(5):627-633. doi:10.1503/cmaj.050626

3. Gidwani-Marszowski R, Asch SM, Mor V, et al. Health system and beneficiary costs associated with intensive end-of-life medical services. JAMA Netw Open. 2019;2(9):e1912161. doi:10.1001/jamanetworkopen.2019.12161

4. Smith S, Brick A, O’Hara S, Normand C. Evidence on the cost and cost-effectiveness of palliative care: a literature review. Palliat Med. 2014;28(2):130-150. doi:10.1177/0269216313493466

5. Wright AA, Keating NL, Ayanian JZ, et al. Family perspectives on aggressive cancer care near the end of life. JAMA. 2016;315(3):284-292. doi:10.1001/jama.2015.18604

6. Palliative care. World Health Organization. Accessed July 22, 2020. http://www.who.int/cancer/palliative/definition/en/

7. Kavalieratos D, Corbelli J, Zhang D, et al. Association between palliative care and patient and caregiver outcomes: a systematic review and meta-analysis. JAMA. 2016;316(20):2104-2114. doi:10.1001/jama.2016.16840

8. Temel JS, Greer JA, Muzikansky A, et al. Early palliative care for patients with metastatic non-small-cell lung cancer. N Engl J Med. 2010;363(8):733-742. doi:10.1056/NEJMoa1000678

9. O’Connor NR, Moyer ME, Behta M, Casarett DJ. The impact of inpatient palliative care consultations on 30-day hospital readmissions. J Palliat Med. 2015;18(11):956-961. doi:10.1089/jpm.2015.0138

10. Chen CY, Thorsteinsdottir B, Cha SS, et al. Health care outcomes and advance care planning in older adults who receive home-based palliative care: a pilot cohort study. J Palliat Med. 2015;18(1):38-44. doi:10.1089/jpm.2014.0150

11. May P, Normand C, Cassel JB, et al. Economics of palliative care for hospitalized adults with serious illness: a meta-analysis. JAMA Intern Med. 2018;178(6):820-829. doi:10.1001/jamainternmed.2018.0750

12. Morrison RS, Penrod JD, Cassel JB, et al; Palliative Care Leadership Centers’ Outcomes Group. Cost savings associated with US hospital palliative care consultation programs. Arch Intern Med. 2008;168(16):1783-1790. doi:10.1001/archinte.168.16.1783

13. Singer AE, Meeker D, Teno JM, Lynn J, Lunney JR, Lorenz KA. Symptom trends in the last year of life from 1998 to 2010: a cohort study. Ann Intern Med. 2015;162(3):175-183. doi:10.7326/M13-1609

14. Gill TM, Han L, Leo-Summers L, Gahbauer EA, Allore HG. Distressing symptoms, disability, and hospice services at the end of life: prospective cohort study. J Am Geriatr Soc. 2018;66(1):41-47. doi:10.1111/jgs.15041

15. Pantilat SZ, O’Riordan DL, Dibble SL, Landefeld CS. Longitudinal assessment of symptom severity among hospitalized elders diagnosed with cancer, heart failure, and chronic obstructive pulmonary disease. J Hosp Med. 2012;7(7):567-572. doi:10.1002/jhm.1925

16. Lee RY, Brumback LC, Sathitratanacheewin S, et al. Association of physician orders for life-sustaining treatment with ICU admission among patients hospitalized near the end of life. JAMA. 2020;323(10):950-960. doi:10.1001/jama.2019.22523

17. Hawley P. Barriers to access to palliative care. Palliat Care. Published online February 20, 2017. doi:10.1177/1178224216688887

18. Harrison JD, Young JM, Price MA, Butow PN, Solomon MJ. What are the unmet supportive care needs of people with cancer? a systematic review. Support Care Cancer. 2009;17(8):1117-1128. doi:10.1007/s00520-009-0615-5

19. Hanson LC, Eckert JK, Dobbs D, et al. Symptom experience of dying long-term care residents. J Am Geriatr Soc. 2008;56(1):91-98. doi:10.1111/j.1532-5415.2007.01388.x

20. Kelley AS, Meier DE. The role of palliative care in accountable care organizations. Am J Manag Care. 2015;21(spec no 6):SP212-SP214.

21. Claffey TF, Agostini JV, Collet EN, Reisman L, Krakauer R. Payer-provider collaboration in accountable care reduced use and improved quality in Maine Medicare Advantage plan. Health Aff (Millwood). 2012;31(9):2074-2083. doi:10.1377/hlthaff.2011.1141

22. Kerr CW, Donohue KA, Tangeman JC, et al. Cost savings and enhanced hospice enrollment with a home-based palliative care program implemented as a hospice–private payer partnership. J Palliat Med. 2014;17(12):1328-1335. doi:10.1089/jpm.2014.0184

23. Cassel BJ, Kerr KM, McClish DK, et al. Effect of a home-based palliative care program on healthcare use and costs. J Am Geriatr Soc. 2016;64(11):2288-2295. doi:10.1111/jgs.14354

24. Colaberdino V, Marshall C, DuBose P, Daitz M. Economic impact of an advanced illness consultation program within a Medicare Advantage plan population. J Palliat Med. 2016;19(6):622-625. doi:10.1089/jpm.2015.0423

25. Yosick L, Crook RE, Gatto M, et al. Effects of a population health community-based palliative care program on cost and utilization. J Palliat Med. 2019;22(9):1075-1081. doi:10.1089/jpm.2018.0489

26. Watanabe SM, Faily V, Mawani A, et al. Frequency, timing, and predictors of palliative care consultation in patients with advanced cancer at a tertiary cancer center: secondary analysis of routinely collected health data. Oncologist. 2020;25(8):722-728. doi:10.1634/theoncologist.2019-0384

27. Seaman JB, Barnato AE, Sereika SM, Happ MB, Erlen JA. Patterns of palliative care service consultation in a sample of critically ill ICU patients at high risk of dying. Heart Lung. 2017;46(1):18-23. doi:10.1016/j.hrtlng.2016.08.008

28. Kelley AS, Covinsky KE, Gorges RJ, et al. Identifying older adults with serious illness: a critical step toward improving the value of health care. Health Serv Res. 2017;52(1):113-131. doi:10.1111/1475-6773.12479

29. Courtright KR, Madden V, Gabler NB, et al. Rationale and design of the Randomized Evaluation of Default Access to Palliative Services (REDAPS) trial. Ann Am Thorac Soc. 2016;13(9):1629-1639. doi:10.1513/AnnalsATS.201604-308OT

30. Pirovano M, Maltoni M, Nanni O, et al. A new palliative prognostic score: a first step for the staging of terminally ill cancer patients. Italian Multicenter and Study Group on Palliative Care. J Pain Symptom Manage. 1999;17(4):231-239. doi:10.1016/s0885-3924(98)00145-6

31. Spettell CM, Rawlins WS, Krakauer R, et al. A comprehensive case management program to improve palliative care. J Palliat Med. 2009;12(9):827-832. doi:10.1089/jpm.2009.0089

32. Courtright KR, Chivers C, Becker M, et al. Electronic health record mortality prediction model for targeted palliative care among hospitalized medical patients: a pilot quasi-experimental study. J Gen Intern Med. 2019;34(9):1841-1847. doi:10.1007/s11606-019-05169-2

33. Cai X, Perez-Concha O, Coiera E, et al. Real-time prediction of mortality, readmission, and length of stay using electronic health record data. J Am Med Inform Assoc. 2016;23(3):553-561. doi:10.1093/jamia/ocv110

34. Delahanty RJ, Kaufman D, Jones SS. Development and evaluation of an automated machine learning algorithm for in-hospital mortality risk adjustment among critical care patients. Crit Care Med. 2018;46(6):e481-e488. doi:10.1097/CCM.0000000000003011

35. Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. BMC Med Inform Decis Mak. 2018;18(suppl 4):122. doi:10.1186/s12911-018-0677-8

36. Sahni N, Simon G, Arora R. Development and validation of machine learning models for prediction of 1-year mortality utilizing electronic medical record data available at the end of hospitalization in multicondition patients: a proof-of-concept study. J Gen Intern Med. 2018;33(6):921-928. doi:10.1007/s11606-018-4316-y

37. Sahni N, Tourani R, Sullivan D, Simon G. min-SIA: a lightweight algorithm to predict the risk of 6-month mortality at the time of hospital admission. J Gen Intern Med. 2020;35(5):1413-1418. doi:10.1007/s11606-020-05733-1

38. Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. In: Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE; 2017:311-316. doi:10.1109/BIBM.2017.8217669

39. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. Proc Int Conf Learn Represent (ICLR 2013). 2013. https://arxiv.org/abs/1301.3781

40. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv. Published online December 22, 2014. https://arxiv.org/abs/1412.6980

41. Reimers N, Gurevych I. Optimal hyperparameters for deep LSTM-networks for sequence labeling tasks. arXiv. Published online July 21, 2017. Updated August 16, 2017. https://arxiv.org/abs/1707.06799

42. Kononenko I, Kukar M. Machine learning basics. In: Kononenko I, Kukar M. Machine Learning and Data Mining. Horwood Publishing Limited; 2007:59-105. doi:10.1533/9780857099440.59

43. Ho TK. Random decision forests. In: Proceedings of the 3rd International Conference on Document Analysis and Recognition. IEEE; 1995. doi:10.1109/ICDAR.1995.598994

44. Model-building strategies and methods for logistic regression. In: Hosmer D Jr, Lemeshow S, Sturdivant RX. Applied Logistic Regression. 3rd ed. John Wiley & Sons; 2013. doi:10.1002/0471722146.ch4

45. Han J, Moraga C. The influence of the sigmoid function parameters on the speed of backpropagation learning. In: Mira J, Sandoval F, eds. From Natural to Artificial Neural Computation: Lecture Notes in Computer Science. Springer; 1995. doi:10.1007/3-540-59497-3_175

46. Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning. International Machine Learning Society; 2010:807-814. https://dl.acm.org/doi/10.5555/3104322.3104425

47. Bao W, Yue J, Rao Y. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS One. 2017;12(7):e0180944. doi:10.1371/journal.pone.0180944

48. Greff K, Srivastava RK, Koutnik J, Steunebrink BR, Schmidhuber J. LSTM: a search space odyssey. IEEE Trans Neural Netw Learn Syst. 2017;28(10):2222-2232. doi:10.1109/TNNLS.2016.2582924

49. How DNT, Loo CK, Sahari KSM. Behavior recognition for humanoid robots using long short-term memory. Int J Adv Robot Syst. Published online October 26, 2016. doi:10.1177/1729881416663369

50. Zeltzer D, Balicer RD, Shir T, Flaks-Manov N, Einav L, Shadmi E. Prediction accuracy with electronic medical records versus administrative claims. Med Care. 2019;57(7):551-559. doi:10.1097/MLR.0000000000001135

51. Smith G, Bernacki R, Block S. The role of palliative care in population management and accountable organizations. J Palliat Med. 2015;18(6):486-494. doi:10.1089/jpm.2014.0231

52. Moons KGM, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162(1):W1-W73. doi:10.7326/M14-0698

53. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148-1154. doi:10.1377/hlthaff.2014.0352

54. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169(12):866-872. doi:10.7326/M18-1990

55. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
