This study presents a methodology for forecasting the demand that COVID-19 places on health resources in an integrated health system.
ABSTRACT
Objectives: To build a model of local hospital utilization resulting from SARS-CoV-2 and to continuously update it with new data.
Study Design: Retrospective analysis of real performance resulting from a model deployed in a major regional health system.
Methods: Using hospitalization data from the Kaiser Permanente Mid-Atlantic States integrated care system during the period from March 10, 2020, through December 31, 2020, and a custom-developed genetic particle filtering algorithm, we modeled the SARS-CoV-2 outbreak in the mid-Atlantic region. This model produced weekly forecasts of COVID-19–related hospital admissions, which we then compared with actual hospital admissions over the same period.
Results: We found that the model was able to accurately capture the data-generating process (weekly mean absolute percentage error, 10.0%-48.8%; Anderson-Darling P value of .97 when comparing percentiles of observed admissions with the uniform distribution) once the effects of social distancing could be accurately measured in mid-April. We also found that our estimates of key parameters, including the reproductive rate, were consistent with consensus literature estimates.
Conclusions: The genetic particle filtering algorithm that we have proposed is effective at modeling hospitalizations due to SARS-CoV-2. The methods used by our model can be reproduced by any major health care system for the purposes of resource planning, staffing, and population care management to create an effective forecasting regimen at scale.
Am J Manag Care. 2022;28(3):124-130. https://doi.org/10.37765/ajmc.2022.88838
Global health systems have been severely taxed by the onset of COVID-19, the disease caused by SARS-CoV-2. The disease has forced health care management organizations to adapt and obtain new capabilities that had been considered outside the typical domain of the health care management space.1 One of the most critical requirements for health care providers has been the ability to independently forecast the impact of the outbreak on an organization’s resources. To create plans around resource allocation and utilization, the Kaiser Permanente Mid-Atlantic States (KPMAS) health system required accurate models of how the disease might spread within our community and how this would affect our hospital utilization. During the epidemic, a wide variety of models intended for forecasting demand have been produced. These models span a range of structures and fitting methodologies.2-9 We chose to employ a compartment modeling approach because of its ease of use, conceptual simplicity, proven effectiveness, and ability to flexibly specify complex models. Compartment models treat an epidemic as a system of differential equations in which patients move between discrete states. Compartment models require fitting algorithms to optimize the parameters used in their simulation forecasts. Particle filtering is one option to perform this training, with the significant advantage that this method can be trained online. Particle filtering has been used successfully in several epidemiological applications, including models of SARS-CoV-2.10
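For illustration, the following is a minimal sketch of a basic SEIR compartment system advanced with a simple Euler scheme. The parameter values are placeholders chosen for readability and are not the fitted values reported in this study.

```python
def seir_step(S, E, I, R, beta, sigma, gamma, N, dt=1.0):
    """Advance a basic SEIR system by one Euler step of length dt (days)."""
    new_exposed = beta * S * I / N * dt      # susceptible -> exposed
    new_infectious = sigma * E * dt          # exposed -> infectious
    new_removed = gamma * I * dt             # infectious -> removed
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_removed,
            R + new_removed)

# Placeholder values for illustration only; these are not fitted estimates.
N = 615_000                                   # modeled adult population
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
beta, sigma, gamma = 0.6, 1 / 3.2, 1 / 3.6    # contact, incubation, and recovery rates

trajectory = []
for day in range(120):
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N)
    trajectory.append((day, S, E, I, R))
```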
When translating an epidemic forecast into operational decision-making, it is critical to determine both the data inputs and outputs of the model. Many data inputs have been used for modeling the pandemic so far. Most modeling inputs include positive COVID-19 tests; unfortunately, testing volume can be strongly affected by regulatory changes in testing protocols and testing priorities.8,9 KPMAS has experienced several significant changes to its testing policies in response to policy directives, test availability, and patient demand. These changes caused significant inconsistencies, which made testing an unreliable data source for modeling. Another popular alternative is COVID-19 deaths, because death data are less likely to undergo significant changes in data acquisition or record keeping; however, deaths are less frequent and data capture is significantly delayed. Ultimately, hospital admissions proved to be the most effective data input for our model: admission counts were captured promptly and consistently, were not subject to the policy-driven variability that affected testing data, and therefore provided a robust indicator on which to base a modeling strategy.
Using a custom package that fits compartment models on hospital admissions with a genetic particle filter, we have applied a reliable short-term forecasting framework that is currently used in KPMAS to assess and plan for the organization’s immediate needs.11 The primary output of this forecasting methodology—hospital admissions—has been used to plan an effective response for resource and personnel management across the region. Our objectives were to create a simple framework for constructing short-term forecasts of SARS-CoV-2 hospital admissions, give a realistic view of the real-time results that can be obtained with this framework, and provide a detailed comparison of this model’s parameter estimates with the best estimates available in the literature.
METHODS
Setting
Kaiser Permanente is the largest not-for-profit integrated health system in the United States, serving 12.4 million members nationwide and more than 770,000 members in the Mid-Atlantic region, representing Southern Maryland, the District of Columbia, and Northern Virginia.12 KPMAS members are a racially and socioeconomically diverse population. The mean age of members is 47.7 years, and 27.0% of members were 60 years or older as of March 1, 2020. A total of 53.9% of members were female.
Data
We included all members of the KPMAS health care system who had active membership as of February 7, 2020. We limited our analysis to adult members, defined as having a birth date before March 1, 2002. This provided a total population of 613,268 patients. We rounded this number to 615,000 for population modeling purposes, a change of 0.3% in the assumed population size.
When constructing our models, our primary fitting criterion was admission to a hospital with a COVID-19 diagnosis. To facilitate modeling, we extracted daily hospital admissions data for each day from March 10, 2020, through December 31, 2020. When identifying hospital admissions for modeling, we considered only the first hospital admission experienced by each patient; we did not consider readmissions as part of our daily admission counts. There is significant complexity when considering questions of readmissions, as including them would require the creation of several new compartments. This study was ruled exempt by the KPMAS Institutional Review Board.
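As an illustration of this preparation step, the sketch below (using hypothetical file and column names) keeps only each patient’s first COVID-19 admission and produces daily admission counts over the modeling window.

```python
import pandas as pd

# Hypothetical input: one row per COVID-19 hospital admission, with columns
# `patient_id` and `admit_date`; the file and column names are placeholders.
admissions = pd.read_csv("covid_admissions.csv", parse_dates=["admit_date"])

# Keep only each patient's first admission (readmissions are excluded).
first_admits = (
    admissions.sort_values("admit_date")
              .drop_duplicates(subset="patient_id", keep="first")
)

# Count admissions per day over the modeling window, filling empty days with 0.
daily_counts = (
    first_admits.set_index("admit_date")
                .resample("D").size()
                .reindex(pd.date_range("2020-03-10", "2020-12-31"), fill_value=0)
)
```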
Model Definition and Fitting
We employed an extension of the Susceptible-Exposed-Infectious-Removed (SEIR) framework that added explicit compartments for hospitalization and quarantine. These compartments enabled us to fit the model to hospital admissions data despite the long lag between infection and hospitalization. We fit our models using a custom implementation of a genetic particle filter, refitting each week starting on March 31 and concluding on December 22, 2020 (the last date on which we would have a full week of validation data). Table 1 displays our selected prior distributions and compartment definitions. We offer a more detailed explanation of our model definition and fitting process in the eAppendix (available at ajmc.com).
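The eAppendix contains the full specification; the sketch below conveys only the general shape of a single particle-filter update with a simple "genetic" mutation step. The function and parameter names, the Poisson observation model, and the mutation scale are illustrative assumptions and do not reproduce our custom implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_admissions(params, n_days):
    """Placeholder for the compartment simulation; the real model integrates
    the extended SEIR system and returns expected daily hospital admissions."""
    return np.full(n_days, params["mean_daily_admissions"])

def particle_filter_update(particles, observed_admissions, n_days):
    """One assimilation step: weight particles by the likelihood of observed
    admissions, resample in proportion to weight, then mutate parameters."""
    # 1. Poisson log-likelihood of the observed counts for each particle.
    log_weights = []
    for params in particles:
        expected = simulate_admissions(params, n_days) + 1e-9
        log_weights.append(np.sum(observed_admissions * np.log(expected) - expected))
    log_weights = np.array(log_weights)
    weights = np.exp(log_weights - log_weights.max())
    weights /= weights.sum()

    # 2. Resample particle indices according to their normalized weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    resampled = [dict(particles[i]) for i in idx]

    # 3. "Genetic" mutation: jitter each parameter to maintain particle diversity.
    for params in resampled:
        for key in params:
            params[key] *= float(np.exp(rng.normal(0.0, 0.05)))
    return resampled

# Example usage with a single hypothetical parameter and 1 week of observed counts.
particles = [{"mean_daily_admissions": rng.uniform(5, 30)} for _ in range(200)]
observed = np.array([12, 15, 11, 18, 14, 16, 13])
particles = particle_filter_update(particles, observed, n_days=7)
```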
Analytical Methods
We employed several metrics to assess model quality. We first compared the model’s predictions of cumulative hospital admissions over the forecast period with the observed admissions over the period and calculated the mean absolute percentage error (MAPE) over each of the individual forecast particles. In addition to calculating the MAPE, we considered a percentile-percentile plot of the percentiles of daily hospital admissions with reference to their forecast distributions. We also examined several other statistics, particularly the attack rate and parameter values estimated by our model. For each of these statistics, we examined the posterior distribution obtained by the fitting method by extracting all values of the given parameter from particles still active at the end of our fitting period. We computed our model’s attack rate as the percentage of the population in any compartment other than susceptible (ie, all members who had ever been exposed). We computed the model’s effective reproductive rate (Reff) as the product of the contact rate (β), the infectious period (1/γ, the inverse of the recovery rate), and the susceptible fraction (S/N).
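As a concrete illustration of these metrics, the following sketch computes the MAPE for a single forecast particle, the attack rate, and Reff directly from the definitions above; the numeric inputs are hypothetical.

```python
import numpy as np

def mape(forecast_cumulative, observed_cumulative):
    """Mean absolute percentage error (MAPE) for one forecast particle."""
    forecast = np.asarray(forecast_cumulative, dtype=float)
    observed = np.asarray(observed_cumulative, dtype=float)
    return float(np.mean(np.abs(forecast - observed) / observed)) * 100

def attack_rate(susceptible, population):
    """Share of the population ever exposed (everyone outside compartment S)."""
    return 1.0 - susceptible / population

def effective_reproductive_rate(beta, gamma, susceptible, population):
    """R_eff = beta * (1 / gamma) * (S / N)."""
    return beta * (1.0 / gamma) * (susceptible / population)

# Hypothetical values for illustration only.
print(mape([100, 120, 140], [95, 125, 150]))                      # ~5.3%
print(attack_rate(susceptible=529_000, population=615_000))       # ~0.14
print(effective_reproductive_rate(beta=0.30, gamma=1 / 3.6,
                                  susceptible=529_000, population=615_000))  # ~0.93
```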
RESULTS
Hospital Admissions
During the period from March 10, 2020, through December 31, 2020, we observed a cumulative 4301 hospital admissions for SARS-CoV-2. Of these patients, 48.9% were female. The mean age of this group was 61.8 years, and 56.8% of patients admitted to the hospital for SARS-CoV-2 were 60 years or older at the time of the admission.
Forecast Accuracy
In the first 2 forecast periods (beginning on March 31 and April 7), the forecast significantly overestimated hospital admissions (Figure 1). The median estimate for cumulative hospitalizations over the 2-week forecasting period was 2.61 times larger than the true admissions in the first week and 1.86 times larger in the second. Thereafter, however, forecasts improved (Figure 1). The true observed hospital admissions were within the 99% credible interval for every week thereafter except for the weeks of June 6 and October 27, and the MAPE was consistently low (below 50% in all weeks after the first 2 forecasts) (Table 2 and eAppendix Figure 10). This adaptability was especially important at the beginning of the second and third waves, which began in late June and late October, respectively; our model was able to quickly compensate for each change (Figure 1, Figure 2, and Figure 3).
Parameter Estimates
We examined the posterior distributions observed when forecasting for the final period on December 22. We found a median incubation period of 3.2 days and a median infectious period of 3.6 days. We also estimated that patients quarantined for a mean of 8.9 days before being hospitalized and that the hospitalization rate was 5.3%, with a credible interval from 5.3% to 5.4%. Our model yielded a median attack rate estimate of 13.9%, with a 95% credible interval from 13.6% to 14.3%, by the final day of fitting on December 22, 2020.
We estimated that Reff was very high (3.1) in late March; however, the introduction of social distancing measures quickly reduced this to below 1.0 by our model run on April 14. Except for the week of May 5, we estimated that Reff remained below 1.0 until July. Our model estimated that Reff remained above 1.0 for approximately 4 weeks in July (corresponding to the “second wave” reported in the media as social distancing guidelines were relaxed) before collapsing back below 1.0 until October. Finally, we estimated that Reff was above 1.0 for each forecast date from October 20 through the end of 2020, except for the week of November 10 (which yielded an estimate of 0.96). Throughout this period, our estimate of Reff varied between a median estimate of 1.04 (the final 2 forecasts of December) and 1.30 (the week of November 24).
DISCUSSION
Model Contributions and Impacts
The methodology described here can be used by health care organizations to model the continuing outbreak of SARS-CoV-2 on their own populations. Such applications would require relatively minimal technical or modeling sophistication in comparison with alternative frameworks. We believe that this model framework could be useful for any health care system that needs to model the outbreak of SARS-CoV-2 in its population.
Model Impacts to Clinical Operations
The forecasts discussed in this paper have been used extensively by KPMAS to make decisions about resource allocation around the pandemic. Initially, our forecasts were used primarily to assess burndown rates of personal protective equipment and ensure that the system rationed this equipment effectively. In addition, clinical leadership was able to use our forecasts to determine hospital staffing needs and reschedule elective procedures when staff or bed availability was expected to run low.
Later in the pandemic, this forecast became a critical component of a secondary model that was used to forecast COVID-19–related sick leave among health care staff. These data were used in turn to plan allowable planned paid time off, overstaff in anticipation of callouts, and seek contract employees to fill predictable gaps as staff ran short.
Model Performance Over Time
Generally, our model fit well after the first 2 weeks; however, it failed to accurately capture observed hospital admissions during the first 2 weeks of forecasts. These forecasts employed values for the contact rate (β) estimated from the first 2 weeks of modeling, which predated strong local social distancing measures; thus, the primary basis for our estimate of the reproductive rate did not sufficiently account for social distancing.13-16
This highlights a general risk for any backward-looking modeling framework. These models assume that future system dynamics are consistent with the most recent trends in the system, which may not be true. In this case, our model failed to account for the introduction of strict social distancing measures and nonpharmaceutical interventions, including school closures and restrictions on public gatherings, dining, and more.14-16 We observed similar results in our forecasts at the beginning of the second wave in late June and of the third wave in October.
In each of these cases, we noticed that our forecasts tended to stabilize within 1 to 2 weeks of obtaining data under the new conditions. Our forecasts on June 30, 2020, underestimated hospitalizations by 48.8%; however, our forecasts were much better the following week, when we obtained a MAPE of only 23.0% (Figure 2). Our results in the third wave mirrored this trend: we obtained a MAPE of 47.1% in the week following October 27, but this dropped to only 14.5% in the week following November 3, 2020 (Figure 3).
Overall, once the system reached an approximate steady state in mid-April, our forecasts were predictive (Figures 1 through 3, Table 2, and eAppendix Figures 11 through 19).
Holiday Effects on Model Performance
We can also examine the effects of holidays on the quality of our forecasts. For this analysis, we considered the US federal holidays of Memorial Day (May 25, 2020), Independence Day (July 4, 2020), Labor Day (September 7, 2020), Columbus Day (October 12, 2020), Veterans Day (November 11, 2020), Thanksgiving Day (November 26, 2020), and Christmas Day (December 25, 2020). Our forecast did not consistently over- or underestimate hospital admissions during the weeks in which these federal holidays occurred (Table 2): 3 of these weeks were overestimated by the model, and 4 were underestimated.
Parameter Estimates
In addition to producing high-quality forecasts, our model produced a set of posterior distributions for a variety of parameters of interest. These distributions agreed well with the prevailing literature in almost all cases, although our initial value of Reff (active during the first week of training) may have been too large.17-25 This is likely because the system was underdetermined early in the fitting process. Despite this, it should be noted that estimates of the fully uncontrolled Reff vary widely, and our value was well below estimates for regions like New York City.26,27
Our values for the controlled (with social distancing) reproductive rate, however, were far closer to the literature consensus.19,20,23 After our forecast dated April 7, we found that this rate varied by week. Generally, the rate remained between approximately 0.7 and 1.0 during extended periods of epidemic contraction but surged into the 1.0 to 1.3 range during the second and third waves (Table 2). In comparison, other researchers estimated state-level reproductive rates to be approximately 1.2 (1.1-1.3) during the second wave in early July.17
When examining the clinical meaning of compartment models, we acknowledge that there is no strict one-to-one correspondence between the model compartments and clinical quantities. With this caveat in mind, it is informative to examine how our estimates compare with clinical values derived from the literature. Observation of individuals has indicated that the typical period between exposure and symptom onset is approximately 6.4 days.28 Furthermore, individuals may begin viral shedding 1 to 3 days before becoming symptomatic, and viral load appears to peak within the first week of symptoms.29-31 In comparison, our model indicated that exposed individuals may take a mean of 3.2 days to become infectious and may remain infectious for a mean of 3.6 days.
We estimated the period between self-quarantine and hospitalization to be approximately 8.6 days on average. This value has been estimated by various other studies to be anywhere from 5.0 days to 11.0 days.25,32,33 Also, other studies, including randomized controlled trials of patients with COVID-19 that began tracking shortly after infection, found a variety of hospitalization rates from 3.2% to 7.1% in their control groups.34-37 Our own estimate of 5.3% was largely consistent with these other estimates.
One outcome of great interest is the current attack rate of the virus, which is often detected using serology or antibody testing. Although there were few publicly available seroprevalence studies dating from the end of 2020 at the time of this writing, Bajema et al estimated seroprevalence of COVID-19 in Maryland at approximately 10.2%, in the District of Columbia at 6.5%, and in Virginia at 3.2% using samples taken from September 7 through September 24, 2020.38 When weighted by the membership distribution in each jurisdiction, this indicates a serological estimate of 7.5%, although KPMAS’s Virginia membership is primarily concentrated in Northern Virginia (and may therefore be underestimated by this measure). Our model forecast dated September 15, 2020, estimated an attack rate of 9.1%, with a 95% credible interval of 8.7% to 9.4%. We feel that this indicates broad agreement between the 2 measures, because subsequent studies have found that seroprevalence was significantly higher in Northern Virginia than in the state overall and because seroprevalence can underestimate overall attack rate because of rapid decay in antibody rates.39-41
Limitations
The chief limitation to the system that we have proposed is its reliance upon historical data and the inability to incorporate real-time information to update its Reff. In practice, we demonstrated that the model may take up to 3 weeks to incorporate such changes into its forecasts.
We suggest that any organization using this model incorporate institutional and real-world knowledge into the forecasting framework provided. To do this, such organizations need only add new transitions in the system contact rate at relevant times. For instance, we might add a break point within the last 3 weeks of fitting if the local region had reopened schools or relaxed social distancing measures. This limitation, although important, is universal to any retrospective model and should be considered as a limitation upon data-based forecasting efforts in general.
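As a sketch of this idea, the function below defines a piecewise-constant contact rate that steps to a new value at a known intervention date; the dates and values shown are hypothetical, and in practice the post-change rate would be refit from data observed after the intervention.

```python
from datetime import date

def contact_rate(day: date,
                 baseline_beta: float = 0.28,
                 post_change_beta: float = 0.40,
                 break_point: date = date(2020, 11, 1)) -> float:
    """Piecewise-constant contact rate with a single break point.

    The break date and both rates are hypothetical placeholders."""
    return post_change_beta if day >= break_point else baseline_beta

# Example: beta before and after a hypothetical November 1 policy change.
print(contact_rate(date(2020, 10, 15)))  # 0.28
print(contact_rate(date(2020, 11, 15)))  # 0.40
```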
The models described in this study are also limited by the data available to us. We have employed only the membership of the KPMAS health care system for modeling. We assume that our population is representative of the mid-Atlantic region.
In addition, our model assumes that the entire KPMAS membership is part of a single population for modeling purposes, which implies perfect social mixing. This assumption is not strictly true because members are spread over a wide geographic region; however, the model was able to effectively overcome this limitation. Our early (unpublished) experiments on subpopulations indicated that there was no forecasting accuracy to be gained by modeling these groups individually. Nevertheless, we strongly recommend that any users attempting to apply this framework to a diverse population investigate whether it is more effective when applied to each subpopulation separately.
CONCLUSIONS
Using a genetic particle filtering methodology, we constructed a weekly rolling forecast of hospital admissions. When conditions remained stable, we found that this methodology produced highly accurate forecasts of hospital admissions. Furthermore, our model produced convincing posterior distributions of several key variables of interest in the COVID-19 literature.
Beyond the contributions of this study’s specific methodology to the health care management domain, we also have demonstrated the effectiveness of a new extension to the particle filter fitting technique—genetic particle filtering—on real data. To our knowledge, this constitutes the first use of this technique in a real-life epidemiological setting. When combined with our strong performance results, we believe that the advantages of this methodology (conceptual simplicity and computational efficiency) imply that it should be explored further in the epidemiological literature.
Author Affiliations: Kaiser Permanente Mid-Atlantic Permanente Medical Group (TJF, AC, MB, JM, FT, EW, MH), Rockville, MD; Kaiser Permanente Mid-Atlantic Permanente Research Institute (MB, EW, MH), Rockville, MD; Kaiser Foundation Health Plan of the Mid-Atlantic States (KC), Rockville, MD.
Source of Funding: This research is funded by the Mid-Atlantic Permanente Medical Group and the Kaiser Permanente Mid-Atlantic States Community Health Program.
Author Disclosures: Mr Teng is employed as a data analyst by the Mid-Atlantic Permanente Medical Group and received SARS-CoV-2 data from the Mid-Atlantic Permanente Medical Group data system. The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (TJF, AC, MB, JM, MH); acquisition of data (TJF, KC, FT, MH); analysis and interpretation of data (TJF, AC, KC, FT, MH); drafting of the manuscript (TJF, AC, EW); critical revision of the manuscript for important intellectual content (TJF, AC, MB, JM, EW, MH); statistical analysis (TJF, AC); administrative, technical, or logistic support (MB, KC, FT, EW, MH); and supervision (TJF, JM, MH).
Address Correspondence to: Tori J. Finch, MS, Kaiser Permanente Mid-Atlantic Permanente Medical Group, 2101 E Jefferson St, Rockville, MD 20852. Email: Tori.J.Finch@kp.org.
REFERENCES
1. Slotkin JR, Murphy K, Ryu J. How one health system is transforming in response to COVID-19. Harvard Business Review. June 11, 2020. Accessed February 3, 2022. https://hbr.org/2020/06/how-one-health-system-is-transforming-in-response-to-covid-19
2. COVID-19 policy briefings. Institute for Health Metrics and Evaluation. Accessed September 14, 2020. http://www.healthdata.org/covid/updates
3. Borchering RK, Viboud C, Howerton E, et al. Modeling of future COVID-19 cases, hospitalizations, and deaths, by vaccination rates and nonpharmaceutical intervention scenarios – United States, April-September 2021. MMWR Morb Mortal Wkly Rep. 2021;70(19):719-724. doi:10.15585/mmwr.mm7019e3
4. COVID-19: what’s new for May 4, 2020. Institute for Health Metrics and Evaluation. May 2020. Accessed September 14, 2020. http://www.healthdata.org/sites/default/files/files/Projects/COVID/Estimation_update_050420.pdf
5. COVIDScenarioPipeline. GitHub. March 2020. Accessed September 14, 2020. https://github.com/HopkinsIDD/COVIDScenarioPipeline
6. DELPHI: the epidemiological model underlying COVID analytics. GitHub. May 2020. Accessed September 14, 2020. https://github.com/COVIDAnalytics/DELPHI/graphs/code-frequency
7. Zou D, Wang L, Xu P, Chen J, Zhang W, Gu Q. Epidemic model guided machine learning for COVID-19 forecasts in the United States. medRxiv. Preprint posted online May 25, 2020. doi:10.1101/2020.05.24.20111989
8. Bayesian compartmental models for COVID-19. GitHub. April 2020. Accessed September 14, 2020. https://github.com/dsheldon/covid
9. COVID-19 projection in the US. GitHub. April 2020. Accessed September 14, 2020. https://github.com/shaman-lab/COVID-19Projection
10. Romero-Severson EO, Hengartner N, Meadors G, Ke R. Change in global transmission rates of COVID-19 through May 6, 2020. PLoS One. 2020;15(8):e0236776. doi:10.1371/journal.pone.0236776
11. Fast facts. Kaiser Permanente. Accessed September 14, 2020. https://about.kaiserpermanente.org/who-we-are/fast-facts
12. Hespanhol L, Vallio CS, Costa LM, Saragiotto BT. Understanding and interpreting confidence and credible intervals around effect estimates. Braz J Phys Ther. 2019;23(4):290-301. doi:10.1016/j.bjpt.2018.12.006
13. Governor Hogan announces closure of all non-essential businesses, $175 million relief package for workers and small businesses affected by COVID-19. News release. The Office of Governor Larry Hogan; March 23, 2020. Accessed September 14, 2020. https://governor.maryland.gov/2020/03/23/governor-hogan-announces-closure-of-all-non-essential-businesses-175-million-relief-package-for-workers-and-small-businesses-affected-by-covid-19/
14. COVID-19 surveillance. Government of the District of Columbia. Accessed February 3, 2022. https://coronavirus.dc.gov/data
15. COVID-19 data in Virginia. Virginia Department of Health. Accessed February 3, 2022. https://www.vdh.virginia.gov/coronavirus/see-the-numbers/covid-19-in-virginia/
16. Ferguson N, Laydon D, Nedjati-Gilani G, et al. Report 9: impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand. Imperial College London. March 16, 2020. Accessed February 3, 2022. https://spiral.imperial.ac.uk/bitstream/10044/1/77482/14/2020-03-16-COVID19-Report-9.pdf
17. Abbott S, Hellewell J, Thompson R, et al. Temporal variation in transmission during the COVID-19 outbreak. epiforecasts.io. Accessed February 3, 2022. https://epiforecasts.io/covid/
18. Abbott S, Munday JD, Hellewell J, et al. Temporal variation in transmission during the COVID-19 outbreak in Italy. Centre for Mathematical Modelling of Infectious Diseases. March 19, 2020. Accessed September 14, 2020. https://cmmid.github.io/topics/covid19/reports/national-time-varying-transmission/italy.pdf
19. Hellewell J, Abbott S, Gimma A, et al; Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group. Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts. Lancet Glob Health. 2020;8(4):e488-e496. doi:10.1016/S2214-109X(20)30074-7
20. Lauer SA, Grantz KH, Bi Q, et al. The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med. 2020;172(9):577-582. doi:10.7326/M20-0504
21. Li R, Pei S, Chen B, et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science. 2020;368(6490):489-493. doi:10.1126/science.abb3221
22. Verity R, Okell LC, Dorigatti I, et al. Estimates of the severity of coronavirus disease 2019: a model-based analysis. Lancet Infect Dis. 2020;20(6):669-677. doi:10.1016/S1473-3099(20)30243-7
23. Zhang J, Litvinova M, Liang Y, et al. Age profile of susceptibility, mixing, and social distancing shape the dynamics of the novel coronavirus disease 2019 outbreak in China. medRxiv. Preprint posted online March 20, 2020. doi:10.1101/2020.03.19.20039107
24. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395(10229):1054-1062. doi:10.1016/S0140-6736(20)30566-3
25. Liu Y, Gayle AA, Wilder-Smith A, Rocklöv J. The reproductive number of COVID-19 is higher compared to SARS coronavirus. J Travel Med. 2020;27(2):taaa021. doi:10.1093/jtm/taaa021
26. Ives AR, Bozzuto C. State-by-state estimates of R0 at the start of COVID-19 outbreaks in the USA. medRxiv. Preprint posted online May 27, 2020. doi:10.1101/2020.05.17.20104653
27. Epiforecasts COVID-19. GitHub. June 2020. Accessed September 14, 2020. https://github.com/epiforecasts/covid-us-forecasts
28. Backer JA, Klinkenberg D, Wallinga J. Incubation period of 2019 novel coronavirus (2019-nCoV) infections among travellers from Wuhan, China, 20-28 January 2020. Euro Surveill. 2020;25(5):2000062. doi:10.2807/1560-7917.ES.2020.25.5.2000062
29. Wei WE, Li Z, Chiew CJ, Yong SE, Toh MP, Lee VJ. Presymptomatic transmission of SARS-CoV-2 — Singapore, January 23–March 16, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(14):411-415. doi:10.15585/mmwr.mm6914e1
30. Pan Y, Zhang D, Yang P, Poon LLM, Wang Q. Viral load of SARS-CoV-2 in clinical samples. Lancet Infect Dis. 2020;20(4):411-412. doi:10.1016/S1473-3099(20)30113-4
31. Tindale LC, Stockdale JE, Coombe M, et al. Evidence for transmission of COVID-19 prior to symptom onset. eLife. 2020;9:e57149. doi:10.7554/eLife.57149
32. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497-506. doi:10.1016/S0140-6736(20)30183-5
33. Cummings MJ, Baldwin MR, Abrams D, et al. Epidemiology, clinical course, and outcomes of critically ill adults with COVID-19 in New York City: a prospective cohort study. Lancet. 2020;395(10239):1763-1770. doi:10.1016/S0140-6736(20)31189-2
34. Mitjà O, Corbacho-Monné M, Ubals M, et al. Hydroxychloroquine for early treatment of adults with mild coronavirus disease 2019: a randomized, controlled trial. Clin Infect Dis. 2021;73(11):e4073-e4081. doi:10.1093/cid/ciaa1009
35. Skipper CP, Pastick KA, Engen NW, et al. Hydroxychloroquine in nonhospitalized adults with early COVID-19: a randomized trial. Ann Intern Med. 2020;173(8):623-631. doi:10.7326/M20-4207
36. Gottlieb RL, Nirula A, Chen P, et al. Effect of bamlanivimab as monotherapy or in combination with etesevimab on viral load in patients with mild to moderate COVID-19: a randomized clinical trial. JAMA. 2021;325(7):632-644. doi:10.1001/jama.2021.0202
37. Tardif JC, Bouabdallaoui N, L’Allier PL, et al; COLCORONA Investigators. Efficacy of colchicine in non-hospitalized patients with COVID-19. medRxiv. Preprint posted online January 27, 2021. doi:10.1101/2021.01.26.21250494
38. Bajema KL, Wiegand RE, Cuffe K, et al. Estimated SARS-CoV-2 seroprevalence in the US as of September 2020. JAMA Intern Med. 2021;181(4):450-460. doi:10.1001/jamainternmed.2020.7976
39. Rogawski McQuade ET, Guertin KA, Becker L, et al. Assessment of seroprevalence of SARS-CoV-2 and risk factors associated with COVID-19 infection among outpatients in Virginia. JAMA Netw Open. 2021;4(2):e2035234. doi:10.1001/jamanetworkopen.2020.35234
40. Ibarrondo FJ, Fulcher JA, Goodman-Meza D, et al. Rapid decay of anti-SARS-CoV-2 antibodies in persons with mild Covid-19. N Engl J Med. 2020;383(11):1085-1087. doi:10.1056/NEJMc2025179
41. Self WH, Tenforde MW, Stubblefield WB, et al; CDC COVID-19 Response Team; IVY Network. Decline in SARS-CoV-2 antibodies after mild infection among frontline health care personnel in a multistate hospital network — 12 states, April–August 2020. MMWR Morb Mortal Wkly Rep. 2020;69(47):1762-1766. doi:10.15585/mmwr.mm6947a2