An updated emergency visit classification tool enables managers to make valid inferences about levels of appropriateness of emergency department utilization and healthcare needs within a population.
ABSTRACT
Objectives: Analyses of emergency department (ED) use require visit classification algorithms based on administrative data. Our objectives were to present an expanded and revised version of an existing algorithm and to use this tool to characterize patterns of ED use across US hospitals and within a large sample of health plan enrollees.
Study Design: Observational study using National Hospital Ambulatory Medical Care Survey ED public use files and hospital billing data for a health plan cohort.
Methods: Our Johns Hopkins University (JHU) team classified many uncategorized diagnosis codes into existing New York University Emergency Department Algorithm (NYU-EDA) categories and added 3 severity levels to the injury category. We termed this new algorithm the NYU/JHU-EDA. We then compared visit distributions across these 2 algorithms and 2 other previous revised versions of the NYU-EDA using our 2 data sources.
Results: Applying the newly developed NYU/JHU-EDA, we classified 99% of visits. Our analyses indicate that an even greater share of US ED visits is nonemergent than the original NYU-EDA suggests. For the first time, we provide a more complete picture of the level of severity among patients treated for injuries within US hospital EDs, with about 86% of such visits being nonsevere. Also, both the original and updated classification tools suggest that, of the 38% of ED visits that are clinically emergent, the majority either do not require ED resources or could have been avoided with better primary care.
Conclusions: The updated NYU/JHU-EDA taxonomy appears to offer cogent retrospective inferences about population-level ED utilization.
Am J Manag Care. 2020;26(3):119-125. https://doi.org/10.37765/ajmc.2020.42636
The New York University Emergency Department Algorithm (NYU-EDA) is widely used to classify emergency department (ED) visits.1,2 The tool was developed in the late 1990s and was based on 5700 ED discharge abstracts from 6 hospitals in the Bronx, a borough of New York City. The NYU-EDA probabilistically classified 659 diagnosis codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The original NYU-EDA thus mapped only about 5% of all ICD-9-CM diagnosis codes. We propose an algorithm that remedies this shortfall and classifies nearly all ED visits.
The NYU-EDA has been applied in health services research studies to identify emergent visits that required ED care.3,4 Several studies have focused on nonemergent and primary care–treatable ED visits and evaluated emergent and nonemergent utilization patterns to assess the impact of healthcare reforms.5-11 Estimates of the proportion of nonemergent visits have ranged between 17% and 49%. One study examined primary care–sensitive (PCS) visits (ie, potentially avoidable emergent visits plus nonemergent and primary care–treatable visits) and, using a statewide all-payer claims database in which 92% of ED visits were classified, found that up to 50% of ED visits were PCS.12
Evidence for the validity of the NYU-EDA has grown over 2 decades. Emergent visits were associated with higher total charges and an increased likelihood of death and inpatient hospitalization, both directly from the ED and within 30 days of a previous visit.10,13-15 However, researchers and emergency medicine clinicians have cautioned against using visit classifications based solely on discharge diagnoses for interventions aimed at reducing unnecessary visits or for denying payment. First, underlying differences in morbidity and access to care may, to some degree, account for utilization patterns detected by an ED visit classification algorithm. Second, at the individual level, the reason a patient presents to the ED may make the visit appropriate even when the discharge diagnosis categorizes the encounter as nonemergent. For example, patients who are experiencing chest pain and come to the ED for evaluation are not necessarily inappropriately using the ED. ED visit classifications are useful tools for understanding the healthcare needs of populations, not the medical needs of individual patients.16-19
A team of Johns Hopkins University (JHU) emergency medicine physicians and health services researchers has further updated and expanded the NYU-EDA using their best clinical judgment and diagnosis aggregations from the Adjusted Clinical Groups (ACG) System.20 In this revised JHU version of the NYU-EDA (or NYU/JHU-EDA for short), we made 3 significant modifications and improvements to the original version and to the updates undertaken by other teams. First, rather than assigning ICD codes probabilistically, we classify each ED visit into 1 of 11 categories. Second, rather than placing all injuries into 1 category, we subcategorize injuries into 3 severity levels: nonsevere injuries, severe injuries, and severe injuries that are likely to require inpatient admission. Third, we significantly expand the classification of ICD codes.
In this article, we describe the updated NYU/JHU-EDA, and, using data from a federal survey of US hospital EDs and a large claims database from multiple health plans, we compare results of our revised tool with the original NYU-EDA and 2 earlier modifications developed by Johnston et al and Ballard et al.2,13
The first objective of this article is to offer a description and first-stage assessment of our ED classification algorithm. The second goal is to use this methodology to offer an account of use patterns of American EDs based on a representative sample of patients visiting hospital EDs and a large national sample of health plan enrollees. In addition to describing our new measurement tool, our analysis adds to the literature on how Americans use EDs and will offer insights into how health plans and other organizations might use classification algorithms to gain an understanding of how populations make use of hospital EDs.
Review of Previous Approaches for Classifying ED Visits
The original NYU-EDA first classifies common primary ED discharge diagnoses as having varying probabilities of falling into each of the 4 following categories: (1) nonemergent; (2) emergent, primary care treatable; (3) emergent, ED care needed, and preventable or avoidable with timely and effective ambulatory care; and (4) emergent, ED care needed, and not preventable.1 The original NYU system categorizes certain diagnoses separately and directly into 5 additional categories: injuries, psychiatric conditions, alcohol related, drug related, or unclassified.
The adaptation by Ballard et al sums the NYU-EDA probabilities for nonemergent and emergent, primary care–treatable visits and compares this sum with the total probability of the emergent, ED care needed categories.13 Depending on which of the 2 resultant likelihoods is larger, visits are classified as nonemergent or emergent, or as intermediate when the probabilities of being nonemergent and emergent are equal. The Ballard et al method classifies visits into 1 of 8 categories and has been shown to be a good predictor of subsequent hospitalization and death within 30 days of an ED visit.13
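To make this summation logic concrete, the following is a minimal Python sketch under our reading of the Ballard et al rule; the category keys, labels, and function name are illustrative assumptions, not part of any published implementation.

```python
def ballard_acuity(weights):
    """Collapse NYU-EDA probability weights for one discharge diagnosis into a
    nonemergent/intermediate/emergent label by comparing the 2 summed sides.
    `weights` maps the 4 probabilistic NYU-EDA categories (keys here are
    illustrative labels) to probabilities that sum to 1."""
    nonemergent_side = (weights.get("nonemergent", 0.0)
                        + weights.get("emergent_pc_treatable", 0.0))
    emergent_side = (weights.get("emergent_ed_preventable", 0.0)
                     + weights.get("emergent_ed_not_preventable", 0.0))
    if nonemergent_side > emergent_side:
        return "nonemergent"
    if emergent_side > nonemergent_side:
        return "emergent"
    return "intermediate"  # equal probability on both sides
```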
After 2001, the NYU-EDA was not updated, and newly added diagnosis codes were not classified. In 2017, Johnston and colleagues identified new codes that are “nested” within previously classified diagnoses and applied the original probabilistic weights to these codes.2 New diagnoses that remained unclassified were “bridged” to already weighted codes using ICD-based condition groupings from the Agency for Healthcare Research and Quality Clinical Classification System.21 Instances in which a new diagnosis mapped to several codes with different weights were resolved in favor of the code most likely to represent an unavoidable emergent visit.2 Because assigned weights sum to 1, both the original NYU-EDA and the update by Johnston et al describe a collection of ED visits by averaging weights.
We took a different approach to update, enhance, and expand the NYU-EDA method. We did not use probabilities but rather assigned primary discharge diagnoses to single classes and uniquely classified each ED visit. For codes that had been previously included in the original NYU-EDA, we based our updated assignments on the category with the highest probability. We resolved cases of equally high probabilities among multiple categories by giving preference to the emergent, ED care needed category.
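As an illustration only, here is a minimal Python sketch of this conversion, assuming the original NYU-EDA weights are available as a dictionary of per-category probabilities; the category labels and names are ours.

```python
# Tie-breaking order: prefer the emergent, ED care needed categories,
# per the rule described above. Labels are illustrative.
TIE_PRIORITY = [
    "emergent_ed_not_preventable",
    "emergent_ed_preventable",
    "emergent_pc_treatable",
    "nonemergent",
]

def single_category(weights):
    """Assign a diagnosis to the single NYU-EDA category with the highest
    probability, resolving ties in favor of emergent, ED care needed."""
    best = max(weights.values())
    tied = [cat for cat, w in weights.items() if w == best]
    for cat in TIE_PRIORITY:
        if cat in tied:
            return cat
    return tied[0]  # fallback for any label outside the known set
```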
METHODS
Data Sources
To build our revised methodology, we used 2 data sources. First, we combined ED encounter data from the National Hospital Ambulatory Medical Care Survey (NHAMCS) for 2009 to 2013 and used data from the 2014 survey for validation.22 NHAMCS records the reason for an ED visit, which consists mainly of sign and symptom diagnoses, but this information is not present in claims; we therefore used discharge diagnoses only, retaining a key characteristic of the NYU-EDA and previous revised versions. Second, we extracted hospital ED claims from a large health insurance plan database obtained from QuintilesIMS (Plymouth Meeting, Pennsylvania [on November 6, 2017, the name of the organization changed to IQVIA]). The claims extract spanned the same time period (2009-2013) and included 14 commercial health plans; 6 of these plans also had Medicaid and Medicare managed care enrollees. The database consisted of patient enrollment data, ICD-9-CM diagnoses, hospital revenue center codes, procedures coded with Current Procedural Terminology (CPT), and plan-allowed amounts for medical services.
Following the literature, we identified ED visits in the claims database through the presence of revenue center codes (0450-0459, 0981) and CPT codes for evaluation and management (EM) services in the ED (99281-99285).23,24 We resolved instances in which facility and professional claims indicated different primary diagnoses by prioritizing facility bills. Our rationale for giving diagnoses on facility bills priority over professional bills for the same visit is that facility bills relate more closely to the final discharge record, whereas some professional claims may contain preliminary diagnoses.
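The Python sketch below illustrates this identification rule under our assumptions: it treats either an ED revenue center code or an ED evaluation and management CPT code as sufficient evidence of an ED visit, and the function and variable names are ours.

```python
# ED revenue center codes 0450-0459 and 0981; ED E/M CPT codes 99281-99285.
ED_REVENUE_CODES = {f"{code:04d}" for code in range(450, 460)} | {"0981"}
ED_EM_CPT_CODES = {"99281", "99282", "99283", "99284", "99285"}

def is_ed_visit(revenue_codes, cpt_codes):
    """Flag a set of claim lines as an ED visit if any line carries an ED
    revenue center code or an ED evaluation and management CPT code."""
    return bool(ED_REVENUE_CODES & set(revenue_codes)
                or ED_EM_CPT_CODES & set(cpt_codes))

def visit_primary_diagnosis(facility_dx, professional_dx):
    """Prefer the facility claim's primary diagnosis when the facility and
    professional claims for the same visit disagree."""
    return facility_dx if facility_dx is not None else professional_dx
```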
Development of the NYU/JHU-EDA
We applied the Johns Hopkins ACG system to help categorize diagnoses that were not included in the original NYU-EDA method. The system assigns diagnoses found in claims or encounter data to 1 of 32 Aggregated Diagnosis Groups (ADGs) (ie, morbidity types with similar expected need for healthcare resources).25,26 The ACG system also maps diagnoses to 1 of 282 Expanded Diagnosis Clusters (EDCs) (ie, clinically homogeneous groups of diagnoses).
To help expand the scope of the NYU-EDA visit classification to more diagnoses, we formed “clinical classification cells” for ICD codes falling within combinations of ADGs and EDC clusters. Each unique cell was reviewed and categorized by our clinician team of 3 practicing emergency physicians (K.P., D.M.R., and Dr Alan Hsu).
For classification cells with ICD codes that were not previously classified with NYU-assigned probabilities, 2 of our clinicians independently assigned an ED visit class. After differences among approximately 35% of all manual assignments were reconciled and finalized by the third clinician, we developed majority class assignments for the remaining diagnoses within classification cells. Relatively uncommon diagnoses remained unclassified if they fell within cells containing neither codes with original NYU-EDA weights nor any manually assigned diagnosis. Using this approach, our NYU/JHU-EDA currently classifies 10,723 ICD-9-CM and 74,329 International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes (eAppendix Tables 1 and 2 [eAppendix available at ajmc.com] provide examples of common ICD codes in each NYU/JHU-EDA category).
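A minimal sketch of this majority-vote step, assuming each ICD code has already been mapped to its (ADG, EDC) classification cell; the data structures and names are illustrative.

```python
from collections import Counter

def cell_majority_assignments(code_to_cell, code_to_class):
    """Propagate visit classes to unclassified ICD codes by taking the most
    common class among the already-classified codes in each (ADG, EDC)
    classification cell. Codes in cells with no classified members are
    left unassigned."""
    cell_votes = {}
    for code, cell in code_to_cell.items():
        if code in code_to_class:
            cell_votes.setdefault(cell, Counter())[code_to_class[code]] += 1

    assignments = dict(code_to_class)
    for code, cell in code_to_cell.items():
        if code not in assignments and cell in cell_votes:
            assignments[code] = cell_votes[cell].most_common(1)[0][0]
    return assignments
```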
To help us assign severity levels to ACG-based classification cells consisting of injury-related diagnoses, we used CPT codes for EM services associated with ED visits and further assessed whether visits resulted in an inpatient admission. EM codes classify severity from minor (99281) to high with immediate threat to life or physiologic function (99285). We counted the number of nonsevere injury visits (99281-99283) and the number of severe injury visits (99284, 99285) in our health insurance claims data sets. Based on the larger of the 2 counts, each of our injury diagnosis clusters was classified as being either nonsevere or severe. A subset of severe injury visits was identified as likely to require inpatient hospitalization based on a greater than 50% likelihood of cases being admitted. All injury ICD clusters that were so assigned into 1 of 3 severity levels underwent a final clinical review by our clinician team. A graphic overview of the final classification categories of our revised NYU/JHU-EDA grouping taxonomy is presented in the Figure.
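A minimal sketch of the severity rule for one injury diagnosis cluster, under our reading of the text; the handling of ties and the returned labels are assumptions.

```python
def injury_severity(n_low_acuity_em, n_high_acuity_em, admission_rate):
    """Assign an injury diagnosis cluster to 1 of 3 severity levels from
    counts of low-acuity (99281-99283) and high-acuity (99284-99285) EM
    codes and the cluster's inpatient admission rate.
    Ties between the 2 counts default to nonsevere (an assumption)."""
    if n_low_acuity_em >= n_high_acuity_em:
        return "injury, nonsevere"
    if admission_rate > 0.5:  # more than half of cases admitted
        return "injury, severe, likely admitted"
    return "injury, severe"
```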
Statistical Analysis
We conducted pairwise comparisons of visit distributions among the 4 EDA versions applied to NHAMCS and health plan data and computed Cramér’s V measure of association. Cramér’s V is a number between 0 and 1 that indicates how strongly 2 categorical variables are associated. It is based on Pearson’s χ2 statistic and computed as follows:

\[ V = \sqrt{\frac{\chi^2}{n \cdot \min(r-1,\ c-1)}} \]

where n is the grand total of ED visits, c is the number of columns in a classification table, and r is the number of visit classes. The P value for the significance of V is the same as for Pearson’s χ2 statistic.
V values greater than 0.30 suggest that 2 EDAs produce dissimilar distributions across visit categories. We label 2 EDA visit distributions as similar for V values in the range from 0.1 to 0.3, and values less than 0.10 indicate strong similarity between 2 EDAs.27 We performed our analyses in SAS 9.4 (SAS Institute; Cary, North Carolina).
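Although our analyses were run in SAS, the statistic is straightforward to reproduce; the following is an illustrative Python sketch using SciPy.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V and its P value for an r x c contingency table of visit
    counts (rows: classes under one EDA; columns: classes under another)."""
    table = np.asarray(table)
    chi2, p_value, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    v = np.sqrt(chi2 / (n * min(r - 1, c - 1)))
    return v, p_value
```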
RESULTS
The nationally representative NHAMCS data set included 151,453 ED visits that occurred in a sample of US acute care hospitals from 2009 to 2013. Characteristics of patient visits are shown in the left column of Table 1. Using our health plan database, we identified 10,453,465 ED visits made by 4,404,608 insured individuals. As summarized in the right column of Table 1, 65% of visits in this insured sample were billed to commercial (nongovernmental) insurers compared with 29% in NHAMCS.
Table 2 compares our new classification with the original NYU method and 2 other earlier revisions. This table indicates that the proportion of nonemergent visits in NHAMCS ranged from about 20% for the NYU-EDA and Johnston methods to 44% for the Ballard method, which combined nonemergent visits and visits that are emergent and primary care–treatable into the single nonemergent category. Using our NYU/JHU-EDA classification methodology, the proportion of nonemergent visits was 34% for the 2009-2013 period and 36% in “reserved” 2014 validation data (eAppendix Table 3).
Our revised classification has a higher rate of nonemergent visits than the original NYU method and the closely aligned Johnston update, although the rate is lower than that of the modified approach by Ballard et al. Our method assigns more visits to this category both because a large number of previously unclassified diagnosis codes are placed here and because the percentage in the emergent, ED care needed but preventable category decreases under our single-class assignment, as compared with the probabilistic approach used by the original NYU and Johnston methodologies. Statistical analysis indicated similar visit distributions for the NYU-EDA and Johnston methodologies and differences between these 2 methods and Ballard (eAppendix Table 4).
In the remainder of the results, we make use of our NYU/JHU-EDA both to present results that will help validate the new tool and to more fully characterize the patterns of ED use within each of the 2 large study databases. Table 3 indicates similarities and differences among the distributions of visit classifications across 4 payer types within the federal survey of US hospital EDs. We find a lower emergent visit rate for Medicaid and self-pay patients and a greater emergent visit rate for Medicare patients. Also, we find a comparatively higher injury rate, of mostly nonsevere nature, for commercially insured patients.
To assess the level of resource use within each classification level, we use our health plan data to present the average cost of the ED visit and the proportion admitted as inpatients. Table 4 shows the average total visit cost, defined as payment, and inpatient admission percentages for all classes. The resource use in most classes is as might be expected, with nonemergent and primary care–treatable emergent visits and nonsevere injuries having less resource use than other categories. It is also interesting to note that the less common psychiatric, alcohol and substance use, and unclassified categories were expensive and had a high rate of hospitalization.
DISCUSSION
There is renewed interest in understanding ED use patterns in populations, both because of increased use associated with the access-enhancing Affordable Care Act (ACA) and also as private payers seek to stem their rapidly rising ED spending.28 To assess acuity of ED use at the population level, validated classification algorithms that can use available secondary administrative data will be required. Although the original NYU-EDA and its previous versions have a demonstrated ability to differentiate among visits, the algorithm required updating to be sufficiently sensitive to new ICD codes and changes in healthcare utilization.
A team of emergency medicine physicians and health services researchers has further updated and expanded the NYU-EDA using clinical judgment and diagnosis classification tools from the Johns Hopkins Adjusted Clinical Groups system. In the NYU/JHU-EDA updated version, each visit is classified into only 1 of 11 categories, and unlike previous versions of the NYU-EDA, injuries are categorized into 3 severity levels. Also, by assigning each ED visit into a single category, rather than assigning probabilities related to multiple categories, the algorithm is likely to be more practical to managers and analysts wishing to assess acuity or need for emergency services across their population or organization. Additionally, the updated algorithm classifies 99% of visits, a significantly higher percentage than previous versions.
We used publicly available NHAMCS ED discharge data to categorize national visit distributions based on 3 previous versions of the NYU algorithm—the original NYU-EDA and the revisions of Johnston et al and Ballard et al—as well as our newly developed revision. In comparing the resulting output, it is relevant to remember that Johnston et al provide a “patch” for the NYU-EDA, and the algorithm of Ballard et al combines nonemergent visits with visits that are emergent and primary care treatable to form a single nonemergent category. Our comparative analysis indicates that the ED visit distributions from the NYU-EDA and Johnston algorithms are more similar to each other than to the profile based on the Ballard method. The visit distribution from the NYU/JHU-EDA likewise shows more similarity to Johnston and the original NYU-EDA method than to Ballard’s revision. Differences between the NYU/JHU-EDA and original NYU-EDA and Johnston visit distributions were largest among nonemergent and unclassified categories. Furthermore, the NYU/JHU-EDA method exhibits plausible differences among visit distribution profiles within different types of payers and the uninsured. A health plan cohort provided validation of the NYU/JHU-EDA in terms of costs of visits and inpatient admission from the ED.
The intended use of the NYU/JHU-EDA is to enable inferences about access to primary care by studying patterns of ED use. In this context, some researchers and analysts use the related concept of PCS visits, which combines the 3 categories of emergent visits that require ED resources but are potentially avoidable with ambulatory care, emergent and primary care–treatable visits, and nonemergent visits. Applying the PCS concept using the NYU/JHU-EDA method, we find that the proportion of PCS visits is 58% in our 2 data sets compared with 50% in an earlier study.12
This article adds a revised NYU ED visit classification algorithm to the literature and confirms findings of previous studies that have used related algorithms to study utilization of ED services in populations. Our findings suggest that the NYU/JHU-EDA is potentially suitable as a tool for analyses of ED utilization in various population-oriented contexts, including an assessment of the impact of Medicaid expansion on ED use following the implementation of the ACA.
Limitations
Our study has several limitations. First, similar to previous versions, our algorithm is not a gold standard for classifying individual ED visits. Our emergency medicine clinicians reviewed only a subset of ICD diagnoses using their clinical judgment and preexisting assignments to adjacent codes. To expand the scope of the NYU/JHU-EDA to a larger number of diagnoses, we made visit category assignments based on greatest likelihoods within clusters of ICD codes describing closely related diagnoses. Second, the NYU/JHU-EDA uses primary discharge diagnoses only, which are often proxied by the first diagnoses on health plan claims, and thus tends to capture the most severe condition while omitting co-occurring conditions that could affect visit acuity. Third, our large sample of managed care health plan members was not fully representative of the US population. Fourth, our study involved a small panel of practicing emergency medicine doctors, and assignments remained largely subjective. Fifth, we created an ICD-10-CM version of the NYU/JHU-EDA with the help of General Equivalence Mappings from CMS.29 General Equivalence Mappings enabled us to transition our work on ICD-9-CM codes to ICD-10-CM, and additional clinical review will be needed to remove any inconsistencies in visit class assignments. Although our new methodology uses ICD-10-CM codes, further validation will be needed with recent data sets.
CONCLUSIONS
The ED visit classification tools in this study are intended for retrospective analysis of populations. Users of these tools should be mindful that the original NYU-EDA—and by extension, our revision—“was designed as a tool for health services researchers to make inferences about access to primary care by studying patterns of [ED] use.”17 We believe the results for our updated version support the conclusion that, like the original version and previous revisions, the NYU/JHU-EDA will also be a useful tool that will enable researchers, managers, and policy makers to make cogent inferences about the levels of acuity of ED utilization and healthcare needs within a population.
Acknowledgments
The authors thank Dr Alan Hsu for his help in classifying ICD diagnosis codes for this project.

Author Affiliations: Center for Population Health Information Technology, Department of Health Policy and Management (KWL, JPW), and Department of International Health (KP), The Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD; Department of Emergency Medicine, The Johns Hopkins University School of Medicine (KP, DMR), Baltimore, MD.
Source of Funding: This manuscript has been prepared by faculty and staff at the Johns Hopkins University. The manuscript references the Adjusted Clinical Groups (ACG) system. The Johns Hopkins University holds the copyright to the ACG system and receives royalties from the global distribution of the ACG system. Two of the authors (Drs Lemke and Weiner) are members of a group of researchers who develop and maintain the ACG system with support from the Johns Hopkins University.
Author Disclosures: Drs Lemke, Pham, and Weiner are employed by Johns Hopkins University, which receives royalties from the ACG system. Dr Pham has received 2 bonuses of $2000 each for work on development of the NYU/JHU-EDA classification system for the ACGs but entered into this collaboration without any other funding support or expectation of support. Dr Ravert reports no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (KWL, KP, DMR, JPW); acquisition of data (KWL, JPW); analysis and interpretation of data (KWL, KP, DMR, JPW); drafting of the manuscript (KWL, KP, DMR); critical revision of the manuscript for important intellectual content (KWL, KP, JPW); statistical analysis (KWL); obtaining funding (JPW); administrative, technical, or logistic support (JPW); and supervision (JPW).
Address Correspondence to: Klaus W. Lemke, PhD, Center for Population Health Information Technology, Department of Health Policy and Management, The Johns Hopkins University Bloomberg School of Public Health, 624 N Broadway, Room 601, Baltimore, MD 21205. Email: klemke1@jhu.edu.

REFERENCES
1. Billings J, Parikh N, Mijanovich T. Emergency department use: the New York story. Issue Brief (Commonw Fund). 2000;(434):1-12.
2. Johnston KJ, Allen L, Melanson TA, Pitts SR. A “patch” to the NYU Emergency Department Visit Algorithm. Health Serv Res. 2017;52(4):1264-1276. doi: 10.1111/1475-6773.12638.
3. Wharam JF, Landon BE, Galbraith AA, Kleinman KP, Soumerai SB, Ross-Degnan D. Emergency department use and subsequent hospitalizations among members of a high-deductible plan [erratum in JAMA. 2008;299(2):171]. JAMA. 2007;297(10):1093-1102. doi: 10.1001/jama.297.10.1093.
4. Kaskie B, Obrizan M, Cook EA, et al. Defining emergency department episodes by severity and intensity: a 15-year study of Medicare beneficiaries. BMC Health Serv Res. 2010;10:173. doi: 10.1186/1472-6963-10-173.
5. Powell MP, Yu X, Isehunwa O, Chang CF. Trends in urgency of emergency department visits among those with and without multiple chronic conditions, 2007-2012. J Hosp Med Manage. 2016;2:2. doi: 10.4172/2471-9781.100019.
6. Pukurdpol P, Wiler JL, Hsia RY, Ginde AA. Association of Medicare and Medicaid insurance with increasing primary care–treatable emergency department visits in the United States. Acad Emerg Med. 2014;21(10):1135-1142. doi: 10.1111/acem.12490.
7. Raven MC, Lowe RA, Maselli J, Hsia R. Comparison of presenting complaint vs discharge diagnosis for identifying “nonemergency” emergency department visits. JAMA. 2013;309(11):1145-1153. doi: 10.1001/jama.2013.1948.
8. Akosa Antwi Y, Moriya AS, Simon K, Sommers BD. Changes in emergency department use among young adults after the Patient Protection and Affordable Care Act’s dependent coverage provision. Ann Emerg Med. 2015;65(6):664-672.e2. doi: 10.1016/j.annemergmed.2015.01.010.
9. Gandhi SO, Grant LP, Sabik LM. Trends in nonemergent use of emergency departments by health insurance status. Med Care Res Rev. 2014;71(5):496-521. doi: 10.1177/1077558714541481.
10. Miller S. The effect of insurance on emergency room visits: an analysis of the 2006 Massachusetts health reform. J Public Econ. 2012;96(11-12):893-908. doi: 10.1016/j.jpubeco.2012.07.004.
11. Taubman S, Allen HL, Wright BJ, Baicker K, Finkelstein AN. Medicaid increases emergency department use: evidence from Oregon’s health insurance experiment. Science. 2014;343(6168):263-268. doi: 10.1126/science.1246183.
12. Lines LM, Li NC, Mick EO, Ash AS. Emergency department and primary care use in Massachusetts 5 years after health reform. Med Care. 2019;57(2):101-108. doi: 10.1097/MLR.0000000000001025.
13. Ballard D, Price M, Fung V, et al. Validation of an algorithm for categorizing the severity of hospital emergency department visits. Med Care. 2010;48(1):58-63. doi: 10.1097/MLR.0b013e3181bd49ad.
14. Chen BK, Cheng X, Bennett K, Hibbert J. Travel distances, socioeconomic characteristics, and health disparities in nonurgent and frequent use of hospital emergency departments in South Carolina: a population-based observational study. BMC Health Serv Res. 2015;15:203. doi: 10.1186/s12913-015-0864-6.
15. Gandhi SO, Sabik L. Emergency department visit classification using the NYU Algorithm. Am J Manag Care. 2014;20(4):315-320.
16. Jones K, Paxton H, Hagtvedt R, Etchason J. An analysis of the New York University Emergency Department Algorithm’s suitability for use in gauging changes in ED usage patterns. Med Care. 2013;51(7):e41-e50. doi: 10.1097/MLR.0b013e318242315b.
17. Lowe RA. Updating the emergency department algorithm: one patch is not enough. Health Serv Res. 2017;52(4):1257-1263. doi: 10.1111/1475-6773.12735.
18. Chou SC, Gondi S, Baker O, Venkatesh AK, Schuur JD. Analysis of a commercial insurance policy to deny coverage for emergency department visits with nonemergent diagnoses. JAMA Netw Open. 2018;1(6):e183731. doi: 10.1001/jamanetworkopen.2018.3731.
19. Kellerman AL, Weinick RM. Emergency departments, Medicaid costs, and access to primary care—understanding the link. N Engl J Med. 2012;366(23):2141-2143. doi: 10.1056/NEJMp1203247.
20. Weiner JP, Starfield BH, Steinwachs DM, Mumford LM. Development and application of a population-oriented measure of ambulatory care case-mix. Med Care. 1991;29(5):452-472. doi: 10.1097/00005650-199105000-00006.
21. HCUP tools & software. Healthcare Cost and Utilization Project website. hcup-us.ahrq.gov/tools_software.jsp. Accessed February 4, 2019.
22. Scope and sample design. CDC website. cdc.gov/nchs/ahcd/ahcd_scope.htm. Accessed February 4, 2019.
23. Jeffery MM, Bellolio MF, Wolfson J, Abraham JM, Dowd BE, Kane RL. Validation of an algorithm to determine the primary care treatability of emergency department visits. BMJ Open. 2016;6(8):e011739. doi: 10.1136/bmjopen-2016-011739.
24. Venkatesh AK, Mei H, Kocher KE, et al. Identification of emergency department visits in Medicare administrative claims: approaches and implications. Acad Emerg Med. 2016;24(4):422-431. doi: 10.1111/acem.13140.
25. Starfield B, Weiner JP, Mumford L, Steinwachs D. Ambulatory care groups: a categorization of diagnoses for research and management. Health Serv Res. 1991;26(1):53-74.
26. The Johns Hopkins ACG system website. hopkinsacg.org. Accessed February 4, 2019.
27. Crewson PE. Applied Statistics. Morrisville, NC: Lulu Press; 2015.
28. Health, United States, 2012: With Special Feature on Emergency Care. Hyattsville, MD: National Center for Health Statistics; 2013. cdc.gov/nchs/data/hus/hus12.pdf. Accessed February 4, 2019.
29. Official CMS industry resources for the ICD-10 transition. CMS website. cms.gov/Medicare/Coding/ICD10/index.html. Updated November 1, 2019. Accessed December 2, 2019.