The American Journal of Managed Care, September 2014, Volume 20, Issue 9

Predicting High-Need Cases Among New Medicaid Enrollees

Self-reported health measures embedded in a Medicaid application can form the basis of a predictive model identifying new and returning enrollees at risk of high healthcare utilization.



Objectives

To assess the ability of a short, self-reported health needs assessment (HNA) collected at the time of Medicaid enrollment to predict subsequent utilization and costs.

Study Design

Retrospective cohort study.


Methods

We analyzed individual-level data that included self-reported HNAs, medical care encounter records, and administrative eligibility records for 34,087 childless adult Medicaid enrollees in Wisconsin, covering the period 2009-2010. High need was operationalized using the following outcome variables measured over the first year of program enrollment: having an inpatient admission; membership in the top decile of emergency department (ED) utilization; and membership in the top cost decile. We assessed the ability of the HNA to predict high-need cases using several complementary methods: the C-statistic; integrated discrimination improvement; and sensitivity, specificity, and positive predictive value resulting from multivariate logistic regression estimates.


Results

A model using the HNA along with sociodemographic measures met the Hosmer-Lemeshow criterion for adequate predictive performance for the high-ED and high-cost outcomes (C-statistics of 0.74 and 0.72, respectively). The HNA was associated with large improvements in predictive performance over sociodemographic measures alone for all 3 dependent variables (integrated discrimination improvement of 182%, 413%, and 300% for ED, cost, and inpatient variables, respectively). The HNA also led to considerable improvements in sensitivity and positive predictive value with no resulting decreases in specificity or negative predictive value.


Conclusions

Collecting self-reported health measures for a Medicaid expansion population can yield data of sufficient quality for predicting high-need cases.

Am J Manag Care. 2014;20(9):e399-e407

Take-Away Points

Predictive models of new enrollees at risk for high healthcare utilization were developed using data from a self-reported health needs assessment (HNA) administered as part of the Medicaid application.

• Self-reported HNA data can be used successfully by Medicaid agencies to prospectively classify individuals by risk of high healthcare utilization.

• Self-reported HNA data have promise for building predictive models for new and returning Medicaid populations about whom the program lacks recent utilization history.

• As large numbers of individuals lacking insurance histories enter Medicaid under the Affordable Care Act, states will need to develop such models.

Medicaid programs provide care to a population with widely varying healthcare needs. Because of these variations, appreciable benefits accrue from the ability to prospectively stratify patients into clinically distinct subgroups. Related applications, including targeted case management and the establishment of risk-adjusted performance benchmarks for providers, are key tools in efforts to transform Medicaid into an outcomes-focused payer.1,2

While states differ in the extent to which they employ such techniques for their Medicaid programs,3 they all share the key constraint of lacking information on prior medical history for new enrollees, including the large expansion populations enrolled under the Affordable Care Act. Moreover, Medicaid enrollment is characterized by high levels of churn in coverage status,4,5 further complicating the challenge Medicaid agencies face in garnering recent medical histories of their members. For both new and returning program applicants, self-reported health measures collected at the time of enrollment may be the only practical means of gathering such data. To date, there is minimal evidence regarding whether states' enrollment systems are capable of meeting the data collection task and whether the resulting data are of sufficient quality to be used for predicting high-need cases.

A recent Medicaid expansion in Wisconsin provides a unique opportunity to assess whether self-reported health measures gathered from an existing Medicaid enrollment system can provide clinically meaningful information. Wisconsin’s Medicaid program, in expanding managed care coverage to childless adults in 2009, required that applicants complete a self-reported health needs assessment (HNA) in addition to providing the sociodemographic information typically required for program enrollment.6 Our study uses administrative data from this expansion population to assess the predictive value of collecting self-reported health measures at the time of application—a novel use of Medicaid enrollment systems. To our knowledge, this is the first paper to explore the promise of using Medicaid enrollment systems data in this capacity.

Our paper tests the following 2 hypotheses:

1. HNA data considerably improve the ability to predict utilization and costs incurred over the first year of Medicaid enrollment, relative to the predictive performance of sociodemographic data typically collected by Medicaid agencies at the time of application;

2. A prediction tool comprising a combination of HNA and sociodemographic measures meets accepted thresholds of predictive ability for utilization and cost outcomes.

Assessing the predictive ability of the HNA data provides an instructive case study for other states' Medicaid agencies, as limited empirical evidence exists regarding the predictive capacity of self-reported health measures among Medicaid members. We hypothesize that self-reported health measures are meaningfully predictive of high resource utilization among Medicaid members, in keeping with the related literature demonstrating the appreciable predictive ability of self-reported HNA instruments among populations served by Medicare and the Department of Veterans Affairs (VA).7,8

Medicaid programs nationwide have considerable experience using claims and/or encounter data for a variety of actuarial and quality measurement purposes.9 In contrast, Medicaid agencies lack experience collecting self-reported health data as part of the Medicaid application process. The potential relative benefits of this mode of data collection are large, as the marginal cost of collecting health data at enrollment is appreciably lower than fielding a population-based survey or establishing and maintaining an encounter database suitable for analytic purposes. However, there is great concern about and little evidence regarding the quality of the resulting self-reported data. Poor health status and/or poor literacy may potentially preclude enrollees from accurate reporting.10 Moreover, despite Medicaid agencies making explicit promises to the contrary, enrollees may fear that their answers could affect their eligibility for certain services.11 The presence of these and other unknown (and potentially unknowable) data quality threats demands a careful empirical examination of whether an enrollment-based data collection technique can indeed generate health-related information of sufficient caliber for programmatic purposes.


Methods

Data and Sample

Data from 2 state administrative systems were merged to construct the sample: the Client Assistance for Re-employment and Economic Support System (CARES), which stores all social program applications, and InterChange, which warehouses all claims and encounter data for Wisconsin Medicaid members. The study sample was drawn from the 48,460 enrollees who applied for the waiver program between its launch in July 2009 and the subsequent imposition of an enrollment freeze in October 2009, and who were enrolled in coverage for at least 1 year.

While the Department of Health Services (DHS) had initially intended that all waiver enrollees complete an HNA, logistical constraints precluded its universal administration. As such, the analytic sample was limited to the 34,087 members who completed an HNA at the time of enrollment. These members comprised 70% of the relevant population entering the program during the study period. DHS agency officials have shared with us that in some months case workers processing phone applications had to sacrifice HNA completion in favor of expediency, given the unanticipated volume of applicants (conversation with Linda McCart, director, DHS Policy and Research Section, July 2012). Members with and without HNA information have similar racial and ethnic backgrounds, but differ with respect to age and sex, with HNA respondents being older and disproportionately female (eAppendix Table). While the HNA completion rate was not universal, it compares favorably to that achieved by a similar pilot study assessing the predictive ability of a self-reported health screener collected on a VA population,8 which had a coverage rate of roughly 40%.


Outcome Measures

Emergency department (ED) visits and inpatient utilization were chosen as the primary outcomes of interest, as both of these types of care have long been the focus of Medicaid case management efforts12 and subsets of both (eg, ambulatory care-sensitive ED visits and hospital readmissions) are widely recognized as potential healthcare performance indicators.13,14 Accordingly, they are also the most commonly considered utilization outcomes for predictive modeling applications in Medicaid.11 Medicaid case management programs often seek to target the highest-cost cases15; as such, we examined the incurrence of high costs as an additional outcome of interest. We operationalized the dependent variables by creating the following 3 binary indicators measured over a member's first year of Medicaid enrollment: membership in the top decile of ED utilization, which reflects having 3 or more ED visits; having at least 1 inpatient hospitalization (similar to a top decile measure, as 9.2% of the sample experienced an inpatient event); and membership in the top cost decile, which represents costs of at least $6360.
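As an illustration, the 3 binary indicators can be constructed from member-level utilization data along the following lines. This is a minimal sketch under our own assumptions (input names, use of NumPy); it is not the authors' code.

```python
import numpy as np

def high_need_indicators(ed_visits, inpatient_stays, costs):
    """Construct the 3 binary high-need outcomes over members' first
    enrollment year. Illustrative sketch; input names are assumptions."""
    ed_visits = np.asarray(ed_visits, dtype=float)
    costs = np.asarray(costs, dtype=float)
    # Membership in the top decile of ED utilization (in the study
    # sample this cutoff corresponded to 3 or more visits).
    top_ed = ed_visits >= np.quantile(ed_visits, 0.9)
    # Any inpatient hospitalization (9.2% of the study sample).
    any_inpatient = np.asarray(inpatient_stays) >= 1
    # Membership in the top cost decile ($6360 or more in the study).
    top_cost = costs >= np.quantile(costs, 0.9)
    return top_ed, any_inpatient, top_cost
```

Note that the decile cutoffs are computed within-sample, mirroring how the study defines "top decile" relative to the enrollee cohort rather than an external benchmark.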


We estimated the predictive ability of 7 different sets of predictors, the first of which consisted of a standard set of sociodemographic variables currently collected by Medicaid enrollment systems (see Table 1 for the complete list). Each of the remaining blocks of predictors included both the sociodemographic variables and additional variables drawn from the HNA (see the eAppendix Figure for details on exact wording and progression of HNA measures). The second set of predictors included sociodemographics plus dummy variables reflecting the presence of the following conditions enumerated in the HNA: asthma; cancer; chronic obstructive pulmonary disease; depression; diabetes; emphysema; heart problems; high blood pressure; other mental health condition; and stroke. The third set included sociodemographics plus self-reported measures of behavior captured in the HNA: an indicator reflecting smoking status and an indicator reflecting problem alcohol or other drug use. The fourth set was sociodemographics plus a dummy variable reflecting high prescription drug use, measured as using 5 or more prescription drugs. Access to care indicators that reflected having a regular doctor and a regular clinic comprised, along with sociodemographics, the fifth set; sociodemographics plus a measure representing the previous year's utilization, operationalized as having experienced an ED visit or hospitalization for one of the HNA-enumerated conditions, comprised the sixth. The seventh set of predictors was the entire vector of HNA measures (conditions + behavior + prescriptions + access to care + previous year's utilization) plus sociodemographics.

Statistical Analysis

A series of logistic regression models, corresponding to the 7 blocks of predictors described above, was fitted for each outcome. Thus, the baseline model included only the sociodemographic measures, and each subsequent model included the addition of a subset of (or, in the case of the final specification, the entire set of) HNA measures. For each of the HNA specifications, we tested the incremental predictive ability of the HNA measures over that of the baseline demographic model.
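Each block comparison rests on a standard logistic regression fit. As a rough illustration of that machinery, the sketch below uses plain gradient descent standing in for the maximum-likelihood routines a statistical package would use; function and variable names are ours, not the authors'.

```python
import math

def fit_logistic(X, y, steps=5000, lr=0.1):
    """Fit a logistic regression by gradient descent, intercept
    included. Minimal pure-Python sketch of the model the study fits
    for each predictor block; not the authors' implementation.
    X is a list of feature lists, y a list of 0/1 outcomes."""
    n = len(X)
    k = len(X[0])
    w = [0.0] * (k + 1)  # w[0] is the intercept
    for _ in range(steps):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # predicted - observed
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Predicted probability for one member's feature vector."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

A baseline model would be fit on the sociodemographic columns only, and each augmented model on those columns plus an HNA block; the resulting predicted probabilities feed the discrimination metrics described next.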

We used 3 measures of predictive ability to assess the efficacy of the self-reported HNA measures. First, predictive ability was assessed using the C-statistic, the most commonly reported measure of model discrimination in the related literature.16-18 For a dichotomous outcome, it is identical to the area under the receiver operating characteristic (ROC) curve, a plot of sensitivity (the true positive rate) against 1 minus specificity (the false positive rate) across the entire range of possible predicted probability thresholds. The C-statistic ranges between 0.5 and 1, with a value of 0.5 reflecting predictive ability no better than a coin flip and a value of 1 reflecting perfect predictive ability. A rule of thumb suggested by Hosmer and Lemeshow19 and widely adopted in the clinical literature is as follows: C-statistics greater than or equal to 0.7 are considered acceptable, and values greater than 0.8 are considered excellent.
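The C-statistic can be computed directly from its Mann-Whitney interpretation: the probability that a randomly chosen event receives a higher predicted probability than a randomly chosen non-event, with ties counting one-half. A minimal sketch (illustrative only, not the authors' code):

```python
def c_statistic(y, p):
    """C-statistic (area under the ROC curve) for binary outcomes y
    and predicted probabilities p, via the Mann-Whitney formulation.
    Compares every event/non-event pair; ties count one-half."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))
```

For example, a model that ranks every event above every non-event scores 1.0, while one whose predictions are unrelated to the outcome scores about 0.5.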

Second, we calculated the discrimination slope, a complementary metric that provides greater intuition regarding the magnitude of incremental predictive ability contributed by an augmented model.16-18,20 The discrimination slope is computed as follows:

average(p̂ | event) − average(p̂ | no event),

with p̂ representing the predicted probability resulting from the logistic regression of the dependent variable on a given set of predictors. Alternately stated, the discrimination slope is the difference between the average predicted probability among sample members experiencing the outcome and the average predicted probability among sample members not experiencing the outcome. Improvements in the discrimination slope, termed integrated discrimination improvement (IDI), are reported both as the level difference between a baseline and augmented model and as the percent improvement of the augmented model relative to the baseline model. We employed a split-sample approach to compute all C-statistics and discrimination slopes. This approach involves randomly dividing the sample into 2 subsamples, the first of which is used to fit the model (n = 17,043). The resulting estimates are then applied to the withheld validation sample (n = 17,044), with which the metrics of interest and associated 95% confidence intervals are computed using a 500-replicate bootstrap procedure. We also bootstrapped the difference in the C-statistic and discrimination slope between each augmented model and the baseline model to determine the statistical significance of any marginal gain in predictive performance.
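The discrimination slope and the IDI built on it reduce to simple averages of predicted probabilities. A minimal sketch (illustrative only; the study additionally computes these on a withheld validation sample with bootstrapped confidence intervals):

```python
def discrimination_slope(y, p):
    """Mean predicted probability among members experiencing the
    outcome minus the mean among members who do not."""
    p_event = [pi for yi, pi in zip(y, p) if yi == 1]
    p_noevent = [pi for yi, pi in zip(y, p) if yi == 0]
    return sum(p_event) / len(p_event) - sum(p_noevent) / len(p_noevent)

def idi(y, p_base, p_aug):
    """Integrated discrimination improvement of an augmented model
    over a baseline: returned as (level difference, percent gain)."""
    base = discrimination_slope(y, p_base)
    aug = discrimination_slope(y, p_aug)
    return aug - base, 100.0 * (aug - base) / base
```

For instance, if the baseline slope is 0.2 and the augmented slope is 0.5, the IDI is 0.3 in levels, or a 150% improvement; this is the form in which the paper reports its 182%, 413%, and 300% gains.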

Finally, we computed measures of sensitivity, specificity, and positive and negative predictive values associated with the baseline demographic model compared with the specification employing all the HNA measures. In keeping with the related literature, we chose the 50th, 75th, and 90th percentiles in the predicted risk distribution as our threshold values.15,21 These results are particularly important for case-finding applications, as stakeholders must decide upon a risk threshold at which a program (or additional screening measure) will be administered.
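Given a chosen risk percentile, the four classification measures follow mechanically from the confusion matrix of flagged versus observed cases. A minimal sketch (illustrative only; assumes every cell of the confusion matrix is nonzero):

```python
def classification_metrics(y, p, percentile):
    """Sensitivity, specificity, and positive and negative predictive
    values when members at or above the given percentile of predicted
    risk are flagged. The study used the 50th, 75th, and 90th
    percentiles as thresholds."""
    cutoff = sorted(p)[int(len(p) * percentile / 100)]
    flagged = [pi >= cutoff for pi in p]
    tp = sum(f and yi for f, yi in zip(flagged, y))          # flagged, event
    fp = sum(f and not yi for f, yi in zip(flagged, y))      # flagged, no event
    fn = sum(not f and yi for f, yi in zip(flagged, y))      # missed event
    tn = sum(not f and not yi for f, yi in zip(flagged, y))  # correctly unflagged
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}
```

Raising the percentile flags fewer members, which tends to raise positive predictive value at the cost of sensitivity, the tradeoff discussed in the Results.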


Results

Descriptive statistics are displayed in Table 1. Excepting the association between cancer and top ED use, each condition was positively associated with membership in the top utilization and top cost deciles. Similarly, the behavior, prescription drug, and previous year's utilization measures were all positively correlated with membership in the top utilization and cost deciles. Both access to care measures exhibited modest negative associations with membership in the top ED decile and modest positive associations with the hospitalization and cost outcomes.

Predictive performance of the multivariate specifications is displayed in Tables 2 and 3. For top ED utilization, C-statistics ranged between 0.67 for the baseline specification and 0.74 for the richest HNA specification. The past-year utilization and condition specifications provided the greatest incremental predictive improvement over baseline; both behaviors and prescriptions were also associated with appreciable increases in predictive performance. In contrast, the access-to-care domain provided no meaningful improvement in predictive ability. Comparing discrimination slopes yielded similar conclusions (Table 3).

Predictive performance, measured by either the C-statistic or the discrimination slope, was lowest for the hospitalization outcome (C-statistic for richest specification = 0.67). Here again the conditions and utilization domains offered the greatest incremental increases in predictive ability over baseline (C-statistic of 0.65 and 0.63 for conditions and past utilization specifications, respectively, vs 0.59 for the baseline model). Prescriptions and behaviors both contributed meaningful improvements in predictive accuracy over baseline (IDI of 90% and 60%, respectively), while the access-to-care specification constituted a negligible (albeit statistically significant) improvement.

Similar to the progression of models predicting high ED use, the inclusion of the HNA predictors improved the performance of models predicting membership in the top cost decile sufficiently, such that the richest specification met the Hosmer-Lemeshow rule-of-thumb threshold for acceptability. Specifically, the C-statistics ranged from 0.61 for the baseline model to 0.72 for the richest HNA specification. For the cost outcome the block of condition predictors was associated with the greatest marginal improvement (C-statistic of 0.69; IDI of 267%). In contrast to the other 2 outcomes, the past year’s utilization specification was ranked third with respect to incremental performance improvement; however, it is important to note that while the specification’s relative performance was lower, the level of incremental predictive ability remained considerable (IDI of 153%). Also of note is that the relative contribution of prescription drugs was much higher for the cost outcome compared with the other 2 outcomes (IDI of 248% for cost, compared with 48% and 90% for ED and inpatient utilization, respectively). Importantly, the incremental predictive contribution of the HNA measures was highest for the cost outcome (IDI of 413% for specification including all HNA measures).

Table 4 displays the sensitivity, specificity, and positive and negative predictive values by risk threshold associated with the baseline model and the specification including all HNA measures. For each outcome, the HNA specification was associated with appreciable improvements in sensitivity, especially at the 75th and 90th percentiles, with no resulting decreases in specificity. Similarly, the HNA specification improved positive predictive value across all thresholds, with especially large improvements at the 90th percentile and no associated decline in negative predictive value. The tradeoff between sensitivity and specificity at different risk thresholds is striking, and underscores the tensions inherent in choosing a threshold at which to target case-finding applications of the underlying predictive model. As expected given the low prevalence of the outcome measures, and in keeping with similar studies,15 positive predictive values were fairly low, even at the 90th percentile of the risk distribution (HNA specification: 0.27 for the ED outcome; 0.22 for any hospitalization; and 0.30 for high cost).

Sensitivity Analysis and Limitations

We also performed a number of sensitivity analyses to assess the robustness of these findings across additional specifications. First, we estimated an additional specification including a dummy variable reflecting the presence of a comorbidity (2 or more enumerated conditions) in addition to the full set of HNA measures.22,23 This additional covariate added no incremental predictive power. Second, we estimated models employing top-decile, ambulatory care-sensitive ED visits as the ED outcome measure, using the algorithm created by Billings and colleagues.24 Results were very similar to the specifications modeling top-decile total ED visits (available from the authors upon request).

A potential limitation of our analysis is that it was constrained to use the particular HNA as designed by the Wisconsin DHS. The HNA did not include several of the best-established predictors of future health costs and utilization, including general health status, functional and activity limitations, and all-cause utilization over the past year.25-28 These omissions suggest that our results represent a conservative estimate of the potential predictive ability associated with HNA administration to new adult Medicaid enrollees.


Discussion

We found that a simple, self-reported health needs assessment collected via a Medicaid enrollment system was meaningfully predictive of future healthcare utilization for a sample of new childless adult enrollees. For the outcomes of high ED utilization and high cost, the HNA measures combined with demographic measures demonstrated acceptable predictive performance and were associated with large incremental predictive improvements over demographic variables alone for each of the 3 outcomes, with the largest incremental improvements achieved for the high cost outcome. It is encouraging that the predictive performance of the HNA approaches that achieved in a claims-based study on a comparable Medicaid population in Vermont.15 Two corroborating studies using within-sample comparisons found that the predictive ability of a self-reported health screener approaches but does not quite meet that exhibited by recent claims history.29,30

The Wisconsin experience shows that the use of HNA-like instruments via Medicaid application systems holds great promise for prospective assessment of new enrollees. Medicaid agencies deciding whether and how to use an HNA-like instrument in predictive modeling applications face several important issues, however. As is the case with all risk adjustment applications, agencies will need to work assiduously to ensure that provider groups believe in the legitimacy and fairness of an HNA-based risk model. Medicare's long-standing experience with using survey data as a frailty risk adjuster could serve as an instructive guide in navigating this and other issues inherent in implementing survey-based risk adjustment.27,28,31,32 Agencies interested in using an HNA to target case management and/or other specialized services must be mindful that the positive predictive value of the resulting model is likely to be low. In recognition of this limitation, traditional disease management programs often use predictive modeling as a first screen, complemented by a subsequent screen typically involving follow-up by a nurse case manager.29 Additionally, conducting a business case analysis similar to that pioneered by Billings and colleagues33,34 would give stakeholders a sense of the likely fiscal impacts associated with a case-finding intervention employing a predictive model. We conclude with a final note that, as was the case in Wisconsin, HNA instruments are often designed for several purposes, many of which are not predictive in nature.35 Designing an effective HNA will require balancing its predictive goals with the demands of its other stated objectives.

Author Affiliations: Department of Health Policy and Administration, School of Public Health, University of Illinois at Chicago, IL (LJL); Population Health Institute, School of Medicine and Public Health, University of Wisconsin-Madison (DF, KV); and McCourt School of Public Policy, Georgetown University, Washington, DC (TD).

Funding Source: This study received financial support from the Robert Wood Johnson SHARE initiative.

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (LJL); acquisition of data (DF, KV); analysis and interpretation of data (LJL, KV, TD); drafting of the manuscript (LJL, TD); critical revision of the manuscript for important intellectual content (DF, KV, TD); statistical analysis (LJL); provision of study materials or patients (DF); obtaining funding (LJL); administrative, technical, or logistic support (DF); and supervision (LJL, DF, TD).

Address correspondence to: Lindsey Jeanne Leininger, PhD, Assistant Professor, Department of Health Policy and Administration, University of Illinois at Chicago School of Public Health, 1603 W Taylor St, Chicago, IL 60612. E-mail:

References

1. Medicaid benefits: targeted case management. The Henry J. Kaiser Family Foundation website. Accessed June 19, 2013.

2. Llanos K, Rothstein J, Dyer MB, Bailit M. Physician pay-for-performance in Medicaid: a guide for states. Center for Health Care Strategies website. Accessed February 19, 2012.

3. Winkelman R, Damler R. Risk adjustment in state Medicaid programs. Society of Actuaries website. Accessed February 18, 2012.

4. Klein K, Glied S, Ferry D. Entrances and exits: health insurance churning, 1998-2000. Issue Brief (Commonw Fund). 2005;855:1-12.

5. Sommers BD. Loss of health insurance among non-elderly adults in Medicaid. J Gen Intern Med. 2009;24:1-7.

6. Wisconsin Department of Health Services. Health Insurance for Childless Adults. Waiver Proposal Submitted to the Centers for Medicare and Medicaid Services. Madison, WI: Wisconsin Department of Health Services; 2008.

7. Perrin NA, Stiefel M, Mosen DM, Bauck A, Shuster E, Dirks EM. Self-reported health and functional status information improves prediction of inpatient admissions and costs. Am J Manag Care. 2011;17(12):e472-e478.

8. Maciejewski ML, Liu CF, Derleth A, McDonnell M, Anderson S, Fihn SD. The performance of administrative and self-reported measures for risk adjustment of Veterans Affairs expenditures. Health Serv Res. 2005;40(3):887-904.

9. Byrd VL, Verdier J. Collecting, Using, and Reporting Medicaid Encounter Data: A Primer for States. Washington, DC: Mathematica Policy Research; 2011.

10. Rudd RE, Moeykens BA, Colton TC. Health and literacy: a review of medical and public health literature. [Originally in The Annual Review of Adult Learning and Literacy. 1999;1(5).] National Center for the Study of Adult Learning and Literacy website. Accessed April 19, 2013.

11. Knutson D, Bella M, Llanos K. Predictive modeling: a guide for state purchasers. Center for Health Care Strategies website. Published 2009. Accessed April 19, 2013.

12. Verdier JM, Byrd V, Stone C. Enhanced primary care case management programs in Medicaid: issues and options for states [resource paper]. Center for Health Care Strategies website. Published 2009. Accessed April 19, 2013.

13. Ash AS, Ellis RP. Risk-adjusted payment and performance assessment for primary care. Med Care. 2012;50(8):643-653.

14. Medicare hospital quality initiatives: outcome measures. CMS website. Accessed April 19, 2013.

15. Weir S, Aweh G, Clark RE. Case selection for a Medicaid chronic care management program. Health Care Financ Rev. 2008;30(1):61-74.

16. Shwartz M, Ash AS. Empirically evaluating risk adjustment models. In: Iezzoni LI, ed. Risk Adjustment for Measuring Health Care Outcomes, 4th ed. Chicago, IL: Health Administration Press; 2013:249-300.

17. Pencina MJ, D’Agostino RB Sr, D’Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157-172.

18. Steyerberg EW, Vickers AJ, Cook NR, et al. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21(1):128-138.

19. Hosmer DW, Lemeshow S. Applied Logistic Regression. 2nd ed. San Francisco, CA: John Wiley & Sons; 2000.

20. Pencina MJ, D’Agostino RB, Vasan RS. Statistical methods for assessment of added usefulness of new biomarkers. Clin Chem Lab Med. 2010;48(12):1703-1711.

21. Billings J, Mijanovich T. Improving the management of care for high-cost Medicaid patients. Health Aff (Millwood). 2007;26(6):1643-1654.

22. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.

23. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.

24. Billings J, Parikh N, Mijanovich T. Emergency department use: the New York story. Issue Brief (Commonw Fund). 2000;434:1-12.

25. Fleishman JA, Cohen JW. Using information on clinical conditions to predict high-cost patients. Health Serv Res. 2010;45(2):532-552.

26. Fleishman JA, Cohen JW, Manning WG, Kosinski M. Using the SF-12 health status measure to improve predictions of medical expenditures. Med Care. 2006; 44(5 suppl):I54-I63.

27. Gruenberg L, Kaganova E, Rumshiskaya A. Updating the Social/HMO AAPCC. Report to the Health Care Financing Administration. Cambridge, MA: DataChron Health Systems Inc.; 1993.

28. Gruenberg L, Silva A, Leutz W. An Improved Disability-based Medicare Payment System for the Social/HMO. Report to the Health Care Financing Administration. Cambridge, MA: The Long-Term Care Data Institute; 1993.

29. Andersen DR, Mangen DJ, Grossmeier JJ, Staufacker MJ, Heinz BJ. Comparing alternative methods of targeting potential high-cost individuals for chronic condition management. J Occup Environ Med. 2010;52(6):635-646.

30. Chaudhry S, Jin L, Meltzer D. Use of a self-report-generated Charlson Comorbidity Index for predicting mortality. Med Care. 2005; 43(6):607-615.

31. Robinson JM, Karon SL. Modeling Medicare costs of PACE (Program of All-Inclusive Care for the Elderly) populations. Health Care Financ Rev. 2000;21(3):149-170.

32. Kautter J, Ingber M, Pope GC. Medicare risk-adjustment for the frail elderly. Health Care Financ Rev. 2008;30(2):83-93.

33. Billings J, Dixon J, Mijanovich T, Wennberg D. Case finding for patients at risk of readmission to hospital: development of algorithm to identify high risk patients. BMJ. 2006; 333(7563):327.

34. Billings J, Mijanovich T. Improving the management of care for high-cost Medicaid patients. Health Aff (Millwood). 2007;26(6): 1643-1654.

35. Goetzel RZ, Staley P, Ogden L, et al. A framework for patient-centered health risk assessments: providing health promotion and disease prevention services to Medicare beneficiaries. CDC website. Published 2011. Accessed February 26, 2013.
