This study identifies several factors shared by locally defined delivery system innovations that have been shown to reduce service use and lower health care spending.
Objectives: This study examines 14 independent and diverse health care interventions funded under the second round of Health Care Innovation Awards by CMS to determine whether any organizational, model, or implementation features were strongly associated with the programs’ estimated impacts on total expenditures, hospitalizations, or emergency department visits.
Study Design: We estimated program impacts using awardee-specific difference-in-differences models based on Medicare and Medicaid enrollment and claims data for treatment and matched comparison groups from 2012 to 2018.
Methods: We used 2 analytic approaches to identify program features associated with favorable impacts. The first method identified program characteristics that were common among programs that had estimated reductions in costs and service use and uncommon among those that did not. The second approach compared median impacts among awardees with a given distinguishing feature with median impacts among awardees that lacked the characteristic.
Results: Of the 23 program features examined, 7 were associated with favorable estimated impacts: 3 intervention components (behavioral health, telehealth, and health information technology) and 4 program design and organizational characteristics (having prior experience implementing similar programs, targeting patients with substantial nonmedical needs in addition to medical problems, being focused on individual patient care rather than transforming provider practice, and using nonclinical staff as frontline providers of the intervention).
Conclusions: Innovative health care service delivery models with 2 or more of these 7 identified features were more likely than programs without them to reduce Medicare and Medicaid beneficiaries’ needs for costly health care services.
Am J Manag Care. 2021;27(11):e378-e385. https://doi.org/10.37765/ajmc.2021.88781
Public and private payers for health care constantly search for effective ways to reduce patients’ need for expensive hospitalizations and emergency department (ED) visits. The Medicare Payment Advisory Commission reported in 2019 that approximately 13% of Medicare hospitalizations and 18% of ED visits were potentially avoidable.1 In the 2 decades before the Patient Protection and Affordable Care Act of 2010, CMS’ efforts to reduce costs focused mainly on improving care for Medicare beneficiaries with chronic illnesses because they account for a large share of total expenditures. However, very few of the models tested had meaningful impacts on hospital use or expenditures, and even among those that did, few generated enough savings to cover program costs.
Over the past decade, CMS has tried a different approach to improving care while lowering costs. Section 1115A of the Social Security Act authorizes CMS to test delivery system and payment reforms across a wide range of intervention settings and populations. Again, however, study findings to date indicate that these efforts produce little or no savings.2 To receive broad input from the health care field on innovative solutions, in 2011 CMS released a funding opportunity announcement for the Health Care Innovation Awards (HCIA). Under this initiative, CMS made awards to organizations to test whether refining and broadening the innovative approaches they already used or had tested would yield improvements and efficiencies in delivering health care. CMS administered 2 rounds of funding, with 107 grants awarded in round 1 and 39 in round 2.3 Table 1 highlights the wide variation in interventions provided, populations served, and types of organizations involved in the HCIA round 2 initiative.4
In this paper, we identify programs funded under round 2 that were found to have reduced health care expenditures or hospital inpatient and ED service use, and we look for factors that distinguish them from programs that did not produce these desired effects. The diversity of programs and outcomes funded provides a unique opportunity to determine whether any program or awardee characteristics or implementation practices were associated with a greater likelihood of reducing Medicare and Medicaid beneficiaries’ needs for costly health care services. The diversity and small number of programs make it unlikely that a traditional meta-regression analysis would yield meaningful results. Instead, we use descriptive analyses to identify associations between program features expected to influence the effectiveness of an intervention and program impacts on health care expenditures and service use. Thus, the findings are correlational, not causal, associations. They nonetheless address a major gap in the literature concerning characteristics of innovative health care delivery programs that are potentially important to reducing unnecessary service use and costs. This study uses data from the HCIA round 2 programs only; it does not address findings in the literature concerning savings from other innovative programs in health care delivery or payment systems, such as accountable care organizations or initiatives to improve primary care.
We conducted an impact evaluation of 14 of the 39 programs funded by CMS. The other 25 programs could not be credibly evaluated due to small sample sizes, potentially severe selection bias on unmeasured factors, or a program’s focus on affecting outcomes unrelated to hospitalizations or expenditures. The implementation evaluation for these 14 programs collected information on awardee and program characteristics, changes in intervention design that might have occurred during the award period, and barriers to and facilitators of implementation effectiveness. We collected information through systematically reviewing awardees’ quarterly progress reports and program data, as well as by interviewing program administrators and frontline staff during each of the 3 years of the cooperative agreements (2015-2017).4
The impact evaluations relied on an analysis of Medicare or Medicaid enrollment and claims data from 2013 to 2018 (1 year before enrollment and up to 3 years after). Two of the 14 programs implemented a randomized controlled trial (RCT). The other 12 programs required a quasi-experimental approach, comparing outcomes for program participants (or individuals eligible for the program) with those of matched comparison groups. Most of the impact evaluations used a difference-in-differences model, comparing changes in outcomes among the treatment group after vs before enrollment with changes in outcomes among the matched comparison group over the same period. However, we relied on a cross-sectional design for 4 programs, either because the program implemented an RCT (2 programs) or because the preenrollment trends in outcomes were interrupted by a seminal event (for example, entering a nursing home or hospice) that made individuals eligible for the program, rendering outcomes prior to that event largely irrelevant to the follow-up period. Despite the variation in analytic models, we standardized the evaluations as much as possible to compare estimated impacts across programs.
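The difference-in-differences logic described above can be illustrated with a minimal sketch. This is a hypothetical example with made-up expenditure figures, not the evaluation's actual code, which relied on regression models estimated on Medicare and Medicaid claims; the estimator simply subtracts the comparison group's pre-to-post change from the treatment group's.

```python
# Illustrative difference-in-differences estimator. All data are
# hypothetical; the actual evaluations used regression models on
# Medicare/Medicaid enrollment and claims data.

def did_estimate(treat_pre, treat_post, comp_pre, comp_post):
    """Return the difference-in-differences impact estimate:
    (treatment group's pre-to-post change) minus
    (comparison group's pre-to-post change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(comp_post) - mean(comp_pre))

# Hypothetical per-beneficiary-per-month expenditures ($):
treat_pre  = [1000, 1100, 900]
treat_post = [950, 1050, 850]   # treatment group fell by $50 on average
comp_pre   = [1000, 1100, 900]
comp_post  = [1020, 1120, 920]  # comparison group rose by $20 on average

print(did_estimate(treat_pre, treat_post, comp_pre, comp_post))  # -70.0
```

The comparison group's change serves as the counterfactual: the treatment group's $50 decline, net of the $20 secular increase observed in the matched comparison group, yields an estimated impact of a $70 reduction per beneficiary per month.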
The evaluations of 7 programs relied on Medicare beneficiaries only, 5 relied on Medicaid beneficiaries only, and 2 included both Medicare and Medicaid beneficiaries. Participants who were dually eligible for Medicare and Medicaid were included as Medicare patients in our analyses, because Medicare was the primary payer for the 3 key outcomes.
Outcomes and Criteria Used to Determine Program Success
We estimated impacts on 3 outcomes when data were available: (1) total Medicare or Medicaid expenditures per beneficiary per month, (2) number of hospital admissions per 1000 beneficiaries, and (3) number of ED visits per 1000 beneficiaries. We also estimated impacts over multiple follow-up periods (1, 2, or 3 years after enrollment, separately and cumulatively) and different populations (all participants, plus key subgroups based on the awardee’s theory of action). Given the multiple specifications, we developed and applied 3 rules to determine which programs had sufficient evidence to conclude they achieved some favorable effects.
Study sample. If the estimated impacts for 1 or more outcomes were statistically significant for the full study sample, those results were used to assess program impact. If the results were not statistically significant for the full sample but were significant and favorable for a subgroup for which the intervention was expected to have larger impacts (eg, high-risk cases), the program intervention was identified as favorable for that subgroup. We reported impact estimates for this subgroup for all outcomes.
Evaluation follow-up period. If the results were similar across each 12-month follow-up period, the cumulative results over the full follow-up period were used to assess program impact. If the results differed across follow-up periods, the follow-up period that was most consistent with the awardee’s theory of action was used instead.
Impact results. Programs that had at least 1 favorable and statistically significant impact estimate for a given outcome, time period, or subgroup that was consistent with the awardee’s theory of action were identified as having evidence of a favorable impact. However, this favorable assessment was rejected if impact estimates for either of the other outcomes, or for other time periods or subgroups, were adverse and either large or statistically significant.
Although the rules for assessing impacts were the same for each program, they led to focusing on different outcomes, follow-up periods, and subgroups for each awardee. The goal was to identify programs that had some convincing evidence of reductions in Medicare or Medicaid expenditures or service use even if those programs did not have statistically significant findings for all outcomes or for all enrollees over the full program period.
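The screening logic of the third rule can be sketched as a simple function. This is a hypothetical illustration only: the 10% threshold for a "large" adverse estimate is an assumption made here for concreteness, not a value taken from the evaluation.

```python
# Illustrative screen for the impact results rule: a program shows evidence
# of a favorable impact only if it has at least one statistically significant
# favorable estimate AND no adverse estimate that is large or significant.
# The 10% "large adverse" cutoff is an assumption for illustration.

def has_favorable_evidence(estimates, large_adverse_pct=10.0):
    """estimates: list of (pct_impact, significant) tuples, where a
    negative pct_impact is a reduction (favorable)."""
    any_favorable = any(pct < 0 and sig for pct, sig in estimates)
    any_adverse = any(pct > 0 and (sig or pct >= large_adverse_pct)
                      for pct, sig in estimates)
    return any_favorable and not any_adverse

# A significant 8% reduction in ED visits, no adverse findings:
print(has_favorable_evidence([(-8.0, True), (-2.0, False)]))  # True
# A favorable ED estimate offset by a significant adverse cost estimate:
print(has_favorable_evidence([(-8.0, True), (6.0, True)]))    # False
```

The second call mirrors the situation described later in the results, in which a statistically significant favorable effect on one outcome was rejected because of a significant adverse effect on another.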
Program Features Examined for Association With Impacts
The synthesis analysis was limited to 23 features identified from the literature, discussions with program staff, and the evaluation of the first round of HCIA funding as being potentially associated with impacts. We organized the characteristics into 3 categories: (1) 4 features related to the programs’ intervention components, (2) 9 features related to program and awardee characteristics, and (3) 10 features related to implementation effectiveness. Table 2 lists the program features used in the analysis and the number of awardees with each characteristic.
Methods Used to Identify Associations
We used 2 methods to analyze the association of program features with favorable outcomes. The first method, the distinction method, identified program characteristics that were common among the 4 programs with estimated reductions in costs or service use and relatively uncommon among the other 9 programs without favorable estimates.5 (We excluded 1 program from this method because its small sample size prevented us from determining whether the program had favorable effects.) We defined distinguishing characteristics as those present in at least 3 of the 4 programs with favorable impact estimates and in less than half (4 or fewer) of the 9 programs that did not have clearly favorable estimated impacts.
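The distinction criterion reduces to a pair of counts. The sketch below is a hypothetical illustration of that criterion as stated above; the feature patterns shown are invented, not the study's data.

```python
# Illustrative version of the distinction criterion: a feature
# "distinguishes" favorable programs if it is present in at least 3 of the
# 4 programs with favorable impact estimates and in no more than 4 of the
# 9 programs without clearly favorable estimates.

def is_distinguishing(has_feature_favorable, has_feature_other):
    """Each argument is a list of booleans, one per program."""
    n_fav = sum(has_feature_favorable)  # out of the 4 favorable programs
    n_other = sum(has_feature_other)    # out of the 9 other programs
    return n_fav >= 3 and n_other <= 4

# Hypothetical pattern: feature present in 3 of 4 favorable programs and
# 2 of 9 others -> distinguishing.
print(is_distinguishing([True, True, True, False],
                        [True, False, True] + [False] * 6))  # True
# Present in 3 of 4 favorable programs but also 5 of 9 others -> not.
print(is_distinguishing([True, True, True, False],
                        [True] * 5 + [False] * 4))           # False
```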
The second method compared the median estimated impact among awardees with a given feature with the median impact among awardees that lacked the characteristic. This method provides estimates of the magnitude of the difference in impacts between programs with vs without a given feature. It also helps corroborate the results of the distinction method and identifies the outcome(s) for which estimated impacts differed substantially. We flagged those characteristics for which the median estimated impact among programs with the feature was a reduction of at least 5% and was at least 2.5 percentage points more favorable than the median for programs without the feature. We conducted the median impact assessments separately for each outcome over the 14 programs.
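The median criterion can likewise be written down directly. The sketch below is a hypothetical illustration of the two-part test described above; the impact values are invented for the example, not drawn from the study.

```python
# Illustrative version of the median-impact criterion: a feature is flagged
# if the median impact among programs with it is a reduction of at least 5%
# AND at least 2.5 percentage points more favorable than the median among
# programs without it. Data below are hypothetical.
from statistics import median

def feature_flagged(impacts_with, impacts_without):
    """Impacts are percentage changes; negative values are reductions."""
    med_with, med_without = median(impacts_with), median(impacts_without)
    return med_with <= -5.0 and (med_without - med_with) >= 2.5

# Hypothetical ED-visit impacts (%): programs with the feature show a
# median 7% reduction vs a 0.5% reduction for those without it.
print(feature_flagged([-9.0, -7.0, -4.0], [-3.0, -1.0, 0.0, 2.0]))  # True
# Median reduction of only 3% with the feature -> not flagged.
print(feature_flagged([-4.0, -3.0, -2.0], [-1.0, 0.0, 1.0]))        # False
```

Running both tests (distinction and median) before flagging a feature is what guards against a feature looking distinguishing by prevalence alone while showing little difference in the magnitude of estimated impacts, as happened with hospital-based sponsorship in the results.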
Both of these approaches are descriptive and do not imply causal links between the program features and impacts. There are too few programs and too many characteristics to estimate regressions that could help distinguish correlation from causal relationships.
Based on the criteria above, we determined that 4 of the 14 programs had evidence of favorable impacts on 1 or more of the core outcomes for at least one 12-month follow-up period (Avera, Montefiore, New York City Health + Hospitals, and University of Illinois Chicago) (Table 3). Three of the 4 programs had statistically significant estimated reductions in ED visits, ranging from 7% to 14%. One awardee also had a statistically significant estimated reduction in hospital admissions of 6%; the other 3 had favorable but not statistically significant estimated impacts on this outcome. These estimated effects on major cost drivers led to statistically significant estimated reductions in spending among important subgroups of eligible beneficiaries in the 2 programs with available expenditure data. Given the statistically significant estimated reductions in ED visits and hospital admissions for 1 awardee and the large estimated effects on ED visits and hospitalizations for another (only the ED estimate was statistically significant), it is likely that these 2 programs also reduced total cost of care for their Medicaid participants. For 3 of the 4 programs, the favorable effects were limited to a subgroup of beneficiaries expected to receive the greatest benefit from the program intervention. None of the other 10 programs met the criteria for having evidence of favorable impacts. Although 2 other awardees each had 1 statistically significant favorable effect, they also had a statistically significant adverse impact on 1 of the other outcomes, suggesting that the favorable estimated effect might be due to a weak comparison group or a true effect counterbalanced by a true adverse effect (see the impact results rule above).
Of the 23 program features we considered, we found 8 to be associated with favorable impacts in the distinction analysis (Table 4), and the median analysis confirmed and quantified the association for 7 of these (Table 5). The 7 features include 3 intervention components and 4 program design or awardee characteristics, described in the next section. Hospital-based interventions were more prevalent among programs with evidence of favorable impacts, but the median estimated improvements in outcomes for hospital-based programs were smaller than those for nonhospital programs. Thus, the greater relative prevalence of hospital-based programs among the 4 programs with favorable impacts appears to be due to other factors that are correlated with hospital sponsorship.
Associations Between Program Components and Impact
All 3 programs that relied on integrating behavioral health services with physical health services had favorable impacts, and median estimated impacts for the 3 programs were substantially larger than median impacts for the other 11 programs on all 3 outcomes. For patients with depression, anxiety, or behavioral problems such as addictions, physical health problems often cannot be treated effectively unless these other problems are adequately addressed. For example, 1 program with evidence of favorable impacts provided enhanced care coordination and a range of mental health services to children and young adults with complex or chronic conditions. These services evolved from educating participants during the first year to conducting regular mental health assessments, consulting with care coordination staff and participants’ health care providers, and providing mental health services and referrals in the second year. Nonetheless, a systematic review of the literature found that programs integrating behavioral health with primary care often improve care but rarely show savings.6
The 6 awardees that relied on telehealth as a key intervention component also had more favorable median estimated effects on all 3 outcomes, especially for hospitalizations and total expenditures. The distinguishing feature of the telehealth services provided by programs with evidence of favorable effects appears to be adopting systems to remotely monitor and respond to patients’ clinical needs. For example, 1 long-term care provider expanded its existing telehealth model to provide both transitional care coordination for residents discharged from a skilled nursing facility and around-the-clock telehealth consults with physicians and specialists for long-term skilled nursing facility residents and the staff managing their care.
Having health information technology (IT) as a principal intervention component was also associated with more favorable estimated impacts, but not as strongly as the behavioral health or telehealth components. These programs’ interventions relied heavily on computer hardware, software, or infrastructure to provide clinicians and other care providers access to important patient information or treatment options in a timely manner and to share this information with other providers. For example, 1 awardee with evidence of favorable estimated impacts implemented a health IT component that included a patient registry to collect and track participants’ screening scores, between-visit follow-up communications, and participants’ care plans and goals. Participants could also subscribe to an interactive voice response smartphone application that enabled them to complete follow-up monitoring measures, receive appointment reminders and health education messages, and communicate with patient educators. The increased amount and timeliness of information exchanges appear to have enabled several programs to reduce patients’ need for ED visits and total expenditures substantially.
Associations Between Awardee and Program Characteristics and Impacts
Three program characteristics were associated with larger estimated reductions in ED visits; only 1 was associated with larger median reductions in hospitalizations or expenditures. Awardees that had previous experience implementing similar programs likely had better planning and early identification and amelioration of potential implementation barriers, such as establishing strategic partnerships and addressing staffing needs and requirements. Staff with greater experience with change might have been more comfortable adapting their workflows to accommodate innovations. Prior experience might also facilitate buy-in among providers and other factors associated with delivering quality services. For example, for 1 program with favorable estimates, most of its participating practices had prior experience providing on-site integrated behavioral health. It also had implemented a measurement-based approach to care that enabled primary care physicians and behavioral health staff to work together in the primary care setting to provide behavioral health care services and referrals.
Awardees that served predominantly socially fragile populations had more favorable estimated impacts on all 3 outcomes than those that did not serve such populations, but only the ED visit difference was sizable. Socially fragile populations are those at higher risk for disease progression because of social circumstances or barriers. For example, homeless and indigent populations, individuals with language barriers or transportation issues, and individuals with major cognitive or mental health problems are considered socially fragile. One of the awardees with evidence of favorable impacts serves as the public safety net in New York City’s health care system. It provided ED care management to frequent users of its emergency care services. However, evidence from a recent RCT of a highly regarded program targeted at this population showed no reductions in hospitalizations or costs. That study concluded that the ability to achieve favorable effects likely depends on the severity of the problems (eg, drug addiction, homelessness, mental illness) of the target population, the availability of community services needed to support them, and a long enough follow-up period to achieve those results.7
Programs whose interventions were primarily patient focused had substantially more favorable median estimated effects on total expenditures and hospitalizations than provider- or facility-focused interventions. These programs might have been more effective because they addressed the barriers that individual patients faced in reducing their need for extensive and expensive care, rather than trying to change providers’ behaviors or introduce new practice protocols. This is not to say that provider-focused interventions are unimportant. Indeed, a rich body of literature acknowledges the importance of reorganizing processes of care (including 1 of the programs with evidence of favorable effects in our study). However, most of these studies also highlight the challenges that practice transformation strategies face in realizing their potential to improve quality of care.8,9
Finally, programs that relied on nonclinical frontline staff, such as community health workers and social workers, had slightly more favorable median estimated effects on ED visits than programs that used clinical staff in this role. The more favorable impact estimates might be due to the greater ability of social workers or community support providers to address the nonmedical problems that prevent patients from adhering to physicians’ recommendations. This is especially true among programs that served socially fragile patients, where the association with impacts is much stronger. The nonmedical problems could include difficulties such as obtaining transportation to follow-up medical appointments or access to affordable medications. For example, 1 awardee used community health workers to conduct initial assessments of participants in their homes. The community health workers then connected participants and their families to relevant social service agencies and coordinated physical and mental health services, including the enhanced mental health services provided by the program.
Associations Between Implementation Effectiveness and Impacts
None of the measures of implementation effectiveness was meaningfully or consistently associated with more favorable median estimated impacts. This lack of association between stronger implementation and more favorable impacts is consistent with the results from the distinction method, which showed no association between these features and whether a program had strong evidence of favorable effects. Several factors could cause the lack of association between implementation and impacts. Awardees that take program monitoring and improvement seriously might be more likely to have effects but also more likely to acknowledge implementation problems. The implementation effectiveness measures used might not be equally relevant across this diverse group of interventions and settings and might produce inconsistent results over time.
The study has several limitations. First, the findings are based on a descriptive comparison of the features of programs that did and did not have impacts, and therefore they cannot be used to draw causal inferences. Given the small number of programs, we cannot separate the independent effect of each program feature on program impacts, nor can we ensure that they are not due to correlation with unobserved factors. Despite this limitation, the results are potentially useful for payers seeking to identify new approaches to reduce preventable service use and costs, because there is almost no literature on the details of which program components or features work best.
Second, our estimated impacts for some of the programs may suffer from selection bias, because only 2 were RCTs. However, we minimized this risk by limiting the analysis to programs for which selection could be adequately controlled. Wherever possible, we estimated impacts over all patients eligible for the program, using an intent-to-treat approach. In other cases, we ensured that we had strong propensity score matches, with parallel trends in outcomes during the preprogram periods, and used a difference-in-differences methodology to estimate impacts.
A third limitation is that the favorable estimates could be due to chance rather than true effects, given the large number of impacts estimated for subgroups, time periods, outcome measures, and programs. We guarded against such concerns by looking across estimates for patterns that were internally consistent and consistent with the program’s theory of action.
Fourth, our measures of implementation effectiveness are weak, because (1) programs differed in the standards they used in assessing their own challenges and performance, (2) awardees implemented changes over the course of the program, and (3) we had to rely on responses from program staff to our questions rather than our own direct observation of program operations. Furthermore, the degree to which an implementation factor was likely to affect program impacts varied widely across interventions.
A fifth limitation was that the lack of evidence of favorable impacts could be due to the limited time frame that programs had to demonstrate such impacts. Sixth, none of the programs used financial incentives during the study period to encourage favorable effects, so it was not possible to assess whether offering such incentives (or imposing penalties) would have led to different results. Finally, this study did not report findings on whether any estimated reductions in health care expenditures would cover program costs, due to the absence of good data on what those costs would be if the program were sustained or expanded. Furthermore, the CIs around any such net savings estimates were very wide, even without accounting for the unknown variance in the operating cost estimates.
Each of the 4 programs with favorable estimated impacts, and none of the 10 programs without such evidence, had 3 distinct features. They (1) served a socially fragile population; (2) had experience with the intervention before HCIA round 2; and (3) addressed participants’ nonmedical needs, either by including a behavioral health component or relying on nonclinical staff to deliver intervention services. Furthermore, among the 4 programs with favorable impacts, all but 1 had all 3 of the intervention components associated with larger impacts: telehealth, health IT, and behavioral health. None of the programs without impacts had this combination of intervention components. However, 1 of the programs with evidence of favorable effects had none of these intervention components, indicating that these components were not essential for program success. That awardee reduced hospitalizations and ED visits by relying on nonclinical staff such as social workers or community workers to address participants’ social needs.
These results are similar on several dimensions to the findings from the evaluation of the first round of HCIA awards. Using meta-regression analysis, Smith and colleagues identified 5 factors associated with reductions in total cost of care across 43 delivery system innovations.3 Although none of the results were statistically significant given the large CIs, the researchers found that innovations that used health IT, employed community health workers, or targeted clinically fragile patients achieved the greatest cost savings; these were followed by programs with a medical home or behavioral health component that were often designed to address patients’ nonmedical needs. In contrast to our findings, the round 1 evaluators found that estimated savings were close to zero for programs that enrolled socially fragile patients. In addition, the round 1 evaluation found that programs with telehealth components exhibited substantial increases in spending compared with other programs.
Our results are also consistent with those of other studies. Several rigorous evaluations have shown the effectiveness of care management models that use multidisciplinary teams to help patients address unmet needs for social services by connecting them with medical and nonmedical community-based resources. These programs have been shown to reduce the use of health care services (such as hospitalizations) and lower costs.10-13 Other studies have assessed the effect of interventions that relied on community health workers or social workers connecting high-risk patients with social services. Some of these interventions led to a reduction in ED visits, hospitalizations, and Medicaid spending.14-20 Telehealth has also been shown to be an effective way to deliver care, albeit with mixed evidence for whether it can reduce costs and service use.21-23 Indeed, 2 of the programs without evidence of favorable effects in our study had a telehealth component.
This synthesis analysis of the 14 evaluable interventions funded under HCIA round 2 suggests that programs with the features identified in this study might have a greater likelihood of more quickly achieving CMS’ goals of reducing Medicare and Medicaid beneficiaries’ needs for costly health care services. The results do not imply that only programs with these features can be successful in reducing health care expenditures and preventable use of expensive services, but such program features appear to increase the likelihood of success. The results also could be useful for other payers, plans, and quality and payment initiatives with similar goals, such as commercial insurers, managed care organizations, and accountable care organizations. However, the estimated gross savings were modest. Comparison with the awardees’ crude estimates of operating costs per patient (not shown) suggested that net savings, although positive for each program, likely would be relatively small unless more cost-efficient methods for implementing the intervention were identified.
Author Affiliations: Mathematica, Inc (RB, BG, DW, SD), Princeton, NJ; RTI International (JD), Research Triangle Park, NC.
Source of Funding: Center for Medicare and Medicaid Innovation, CMS, contract No. HHSM-500-2014-00034I
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (RB, JD, BG, DW, SD); analysis and interpretation of data (RB, JD, BG, DW, SD); drafting of the manuscript (RB, JD, BG, DW, SD); critical revision of the manuscript for important intellectual content (RB, JD, BG, DW); and statistical analysis (RB, JD, BG, SD).
Address Correspondence to: Boyd Gilman, PhD, Mathematica, Inc, 955 Massachusetts Ave, Ste 801, Cambridge, MA 02138. Email: BGilman@mathematica-mpr.com.
1. Feng Z, Silver B, Segelman M, et al. Developing risk-adjusted avoidable hospitalization and emergency department visits quality measures. Medicare Payment Advisory Commission. August 2019. Accessed July 15, 2020. http://www.medpac.gov/docs/default-source/contractor-reports/august2019_riskadjusted_ah_av_measures_contractor_sec.pdf?sfvrsn=0
2. Peikes D, Taylor EF, O’Malley AS, Rich EC. The changing landscape of primary care: effects of the ACA and other efforts over the past decade. Health Aff (Millwood). 2020;39(3):421-428. doi:10.1377/hlthaff.2019.01430
3. Smith KW, Bir A, Freeman NLB, Koethe BC, Cohen J, Day TJ. Impact of health care delivery system innovations on total cost of care. Health Aff (Millwood). 2017;36(3):509-515. doi:10.1377/hlthaff.2016.1308
4. Gilman B, McCall N, Bogen K, et al. Evaluation of the round two health care innovation awards (HCIA R2): third annual report. CMS. June 2018. Accessed July 15, 2020. https://downloads.cms.gov/files/cmmi/hcia2-yr3evalrpt.pdf
5. Brown RS, Peikes D, Peterson G, Schore J, Razafindrakoto CM. Six features of Medicare coordinated care demonstration programs that cut hospital admissions of high-risk patients. Health Aff (Millwood). 2012;31(6):1156-1166. doi:10.1377/hlthaff.2012.0393
6. Reed SJ, Shore KK, Tice JA. Effectiveness and value of integrating behavioral health into primary care. JAMA Intern Med. 2016;176(5):691-692. doi:10.1001/jamainternmed.2016.0804
7. Noonan K. Disappointing randomized controlled trial results show a way forward on complex care in Camden and beyond. Health Affairs. January 9, 2020. Accessed July 15, 2020. https://www.healthaffairs.org/do/10.1377/hblog20200102.864819/full/
8. Crabtree BF, Nutting PA, Miller WL, et al. Primary care practice transformation is hard work: insights from a 15-year developmental program of research. Med Care. 2011;49(suppl 1):S28-S35. doi:10.1097/MLR.0b013e3181cad65c
9. Gill JM, Bagley B. Practice transformation? Opportunities and costs for primary care practices. Ann Fam Med. 2013;11(3):202-205. doi:10.1370/afm.1534
10. Berkowitz SA, Terranova J, Hill C, et al. Meal delivery programs reduce the use of costly health care in dually eligible Medicare and Medicaid beneficiaries. Health Aff (Millwood). 2018;37(4):535-542. doi:10.1377/hlthaff.2017.0999
11. Boult C, Reider L, Leff B, et al. The effect of guided care teams on the use of health services: results from a cluster-randomized controlled trial. Arch Intern Med. 2011;171(5):460-466. doi:10.1001/archinternmed.2010.540
12. Counsell SR, Callahan CM, Clark DO, et al. Geriatric care management for low-income seniors: a randomized controlled trial. JAMA. 2007;298(22):2623-2633. doi:10.1001/jama.298.22.2623
13. Tsega M, Lewis C, McCarthy D, Shah T, Coutts K. Review of evidence for health-related social needs interventions. The Commonwealth Fund. July 1, 2019. Accessed February 15, 2021. https://www.commonwealthfund.org/sites/default/files/2019-07/COMBINED_ROI_EVIDENCE_REVIEW_7.15.19.pdf
14. Jack HE, Arabadjis SD, Sun L, Sullivan EE, Phillips RS. Impact of community health workers on use of healthcare services in the United States: a systematic review. J Gen Intern Med. 2017;32(3):325-344. doi:10.1007/s11606-016-3922-9
15. Sevak P, Stepanczuk CN, Bradley KWV, et al. Effects of a community-based care management model for super-utilizers. Am J Manag Care. 2018;24(11):e365-e370.
16. Rowe JM, Rizzo VM, Shier Kricke G, et al. The Ambulatory Integration of the Medical and Social (AIMS) model: a retrospective evaluation. Soc Work Health Care. 2016;55(5):347-361. doi:10.1080/00981389.2016.1164269
17. Kangovi S, Mitra N, Norton L, et al. Effect of community health worker support on clinical outcomes of low-income patients across primary care facilities: a randomized clinical trial. JAMA Intern Med. 2018;178(12):1635-1643. doi:10.1001/jamainternmed.2018.4630
18. Kangovi S, Mitra N, Grande D, Long JA, Asch DA. Evidence-based community health worker program addresses unmet social needs and generates positive return on investment. Health Aff (Millwood). 2020;39(2):207-213. doi:10.1377/hlthaff.2019.00981
19. Woods ER, Bhaumik U, Sommer SJ, et al. Community asthma initiative: evaluation of a quality improvement program for comprehensive asthma care. Pediatrics. 2012;129(3):465-472. doi:10.1542/peds.2010-3472
20. Krieger J, Song L, Philby M. Community health worker home visits for adults with uncontrolled asthma: the HomeBASE trial randomized clinical trial. JAMA Intern Med. 2015;175(1):109-117. doi:10.1001/jamainternmed.2014.6353
21. Shigekawa E, Fix M, Corbett G, Roby DH, Coffman J. The current state of telehealth evidence: a rapid review. Health Aff (Millwood). 2018;37(12):1975-1982. doi:10.1377/hlthaff.2018.05132
22. Totten AM, Hansen RN, Wagner J, et al. Telehealth for acute and chronic care consultations. Agency for Healthcare Research and Quality comparative effectiveness review No. 216. April 24, 2019. Accessed February 15, 2021. https://effectivehealthcare.ahrq.gov/products/telehealth-acute-chronic/research
23. Lazur B, Sobolik L, King V. Telebehavioral health: an effective alternative to in-person care. Milbank Memorial Fund. October 15, 2020. Accessed February 15, 2021. https://www.milbank.org/publications/telebehavioral-health-an-effective-alternative-to-in-person-care/