Measuring Continuity of Care for Diabetes: Which Visits to Include?

The American Journal of Managed Care, September 2023, Volume 29, Issue 9

This study examined how inclusion of different provider specialties affected Continuity of Care Index values, year-to-year stability, and association with emergency department visits.


Objectives: Continuity of care measures are widely used to evaluate the quality of health care delivery, but which visits are included varies across studies. Our objective was to determine how the provider specialties included affect continuity values, year-to-year stability, and association with emergency department (ED) visits.

Study Design: Retrospective study of Alabama Medicaid administrative data.

Methods: We included beneficiaries with diabetes who had at least 3 outpatient visits in each of 2018 and 2019 (N = 9578). We defined 3 provider groupings: all providers, diabetes-broad (primary care, cardiology, neurology, endocrinology, ophthalmology, nephrology, and psychiatry), and diabetes-narrow (primary care and endocrinology). Continuity of care was calculated using the Continuity of Care Index (COCI) for each provider grouping. We compared correlation between measures and from year to year using Spearman correlations, and we used multivariable logistic regression to determine association with ED visits.

Results: The mean COCI was 0.54 using visits with all providers, 0.64 with diabetes-broad providers, and 0.83 with diabetes-narrow providers. COCI with diabetes-narrow providers was moderately correlated with the broader sets of providers (Spearman ρ, 0.52-0.65). Comparing each participant’s COCI in 2018 with that in 2019, the mean intraperson difference was similar (0.16-0.22), and correlation was moderate (Spearman ρ, 0.41-0.47) for each measure. COCI had similar weak association with ED visits using each provider grouping (odds ratio, 0.99; 95% CI, 0.98-0.99 for each 0.1-unit difference in COCI).

Conclusions: Continuity values differed substantially depending on which provider specialties were included. The importance of this variation is uncertain, as continuity was weakly associated with ED visits using each of the measures.

Am J Manag Care. 2023;29(9):e274-e279.


Takeaway Points

This study examined how different provider specialty groupings (defined within this study) affected Continuity of Care Index (COCI) values, year-to-year stability, and association with emergency department visits.

  • COCI values were lower when calculated using broader provider groupings, and the narrowest measure was only moderately correlated with the 2 broader measures (Spearman ρ, 0.52-0.65).
  • Year-to-year COCI was only moderately correlated among individuals (Spearman ρ, 0.41-0.47), which means that appropriate timing of continuity relative to an outcome of interest is important.
  • COCI had weak association with emergency department visits using each provider grouping (odds ratio, 0.99; 95% CI, 0.98-0.99 for each 0.1-unit difference in COCI).


Continuity of care is an important aspect of high-quality health care,1 and attempts to quantify continuity of care have been widespread in the health services research literature for decades.2 Translating conceptual aspects of continuity of care into measures that can be applied to clinical and administrative data has proven challenging, and more than 30 different measures have been proposed and studied.3 Although analytic approaches of the alternative measures vary, in general, having more visits with fewer individual providers results in higher continuity. Higher continuity has been shown to be associated with improved outcomes in measures such as costs,4 utilization of a variety of procedures,5,6 emergency department (ED) visits,7 and hospitalizations,4,8 although results have been mixed.5,7,9

Within the field of diabetes, continuity of care is particularly important. Glycemic control, blood pressure control, and monitoring for related complications (eg, retinopathy, nephropathy) are critical for avoiding morbidity,10,11 and these objectives necessitate coordination from primary care and potentially numerous specialists. Studies that have focused on diabetes have also found benefits to higher continuity of care, such as better medication adherence,12 better glycemic control,13 reductions in ED visits14,15 and hospitalizations,15-17 and lower mortality.16

However, there remains a lack of consensus on how to operationalize continuity measures in empiric studies. Particularly when measuring disease-specific care, it is not clear whether including visits with all provider specialties is appropriate (because with most measures of continuity of care, including visits with providers unrelated to diabetes care would decrease continuity values), and analytic practices seem to vary. Studies exploring the continuity of care in patients with diabetes have included all providers,18 all providers with the exception of eye specialists,19 only visits related to diabetes (based on billing codes),15 only a subset of providers who were “plausible referral specialties for diabetes-related comorbidities or monitoring,”14 or another similarly narrow group of providers (eg, primary care, cardiologists, endocrinologists, podiatrists, and ophthalmologists).17,20 To our knowledge, no previous studies have explored how the choice of providers affects continuity of care values, their interpretation, or the association with outcomes such as acute care utilization.

The purpose of the current study is to explore the variation in measurements of continuity of care according to which provider specialties are included. Specifically, we sought to determine how 3 alternative groupings of provider specialties included in calculating continuity of care affected continuity, year-to-year stability among individuals, and association with ED visits. We hypothesized that the narrower groupings of provider specialties would yield higher continuity scores that were more stable year to year and more strongly associated with ED visits and that continuity scores calculated using the various measures would not be strongly correlated.


Study Design and Population

For this study, we utilized administrative data from the Alabama Medicaid program. Alabama did not expand Medicaid as part of the Affordable Care Act, so eligibility for adults is largely limited to those who receive disability benefits, are pregnant, or are 65 years or older and meet income requirements. In 2020, there was a mean of 1,069,624 Medicaid enrollees in Alabama each month (21.3% of the total population), although slightly more than half of those were children.21 To be included in the sample, individuals had to have uninterrupted coverage in calendar years 2017, 2018, and 2019 and be aged 18 to 64 years for the entire study period. We used the CMS Chronic Conditions Data Warehouse definition to identify individuals with diabetes, requiring at least 1 claim with a diabetes diagnosis from 2017 (to identify diabetes at the start of 2018) and 1 from 2018 (to identify diabetes at the start of 2019) as well as at least 3 total claims with a diabetes diagnosis or a diabetes medication from 2010 to 2019. Because calculations for continuity are generally considered less stable for those with too few visits, we only included those with at least 3 outpatient visits in both 2018 and 2019 using the narrowest group of providers (diabetes-narrow, defined later). This study was approved by the institutional review board at the University of Alabama at Birmingham, and the requirement for informed consent was waived.

Measures

Demographics and comorbidities. We obtained basic demographic data from Alabama Medicaid’s administrative records, including age (as of January 1, 2018), race and ethnicity (categorized as White, Black, Hispanic, and other), and gender (categorized as male and female). We used diagnoses from billing codes to calculate a Charlson Comorbidity Index score22 and prevalence of 6 individual comorbidities (chronic kidney disease, myocardial infarction, congestive heart failure, diabetes complications, cancer, and pulmonary disease).

Ambulatory visits and ED visits. Using outpatient administrative records, we identified each ambulatory visit with a physician or advanced practice provider (APP; including nurse practitioners or physician assistants) and identified a specialty for each visit based on Medicaid billing codes. We also identified each ED visit.

Provider specialty groupings. We defined 3 groupings of specialties for our analyses. The broadest grouping included all providers. Because we examined only ambulatory visits with physicians or APPs, this group would not include ED visits or visits with allied health professionals, such as physical therapists. The intermediate grouping (“diabetes-broad”) was intended to include specialties that could deliver diabetes-related care or care for complications that may be more common in patients with diabetes. For this group, we included visits with primary care, cardiology, neurology, endocrinology, ophthalmology, nephrology, and psychiatry providers. The narrowest grouping (“diabetes-narrow”) was intended to include only specialties that would likely be primarily managing the glycemic aspects of diabetes, which included primary care and endocrinology. Thus, the diabetes-narrow providers are a subset of the diabetes-broad providers, who are a subset of all providers.

Statistical Analysis

First, we calculated Continuity of Care Index (COCI) scores using each of the 3 provider specialty groupings for each year separately (2018 and 2019) using the following formula:

COCI = (Σ_{i=1}^{p} n_i² − n) / [n(n − 1)]

In this formula, n is the total number of visits, n_i is the number of visits with provider i, and p is the total number of unique providers. The COCI ranges from 0 to 1, with more visits dispersed across fewer providers yielding higher values, which signify higher continuity. We then calculated a mean COCI score for each participant (by averaging the scores from 2018 and 2019), and for 2019, we calculated a rolling monthly COCI score based on the prior 12 calendar months.
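As a minimal sketch of the COCI calculation (using hypothetical provider IDs), the index can be computed directly from a patient-year's list of visit providers; note that it is undefined for fewer than 2 visits, which is one reason the study required at least 3 visits per year:

```python
from collections import Counter

def coci(visit_providers):
    """Bice-Boxerman Continuity of Care Index for one patient-year.

    visit_providers: one provider ID per ambulatory visit.
    Returns a value in [0, 1]; requires at least 2 visits.
    """
    n = len(visit_providers)
    if n < 2:
        raise ValueError("COCI is undefined for fewer than 2 visits")
    counts = Counter(visit_providers)  # n_i: number of visits with provider i
    return (sum(c * c for c in counts.values()) - n) / (n * (n - 1))

# All 4 visits with a single primary care provider: perfect continuity
print(coci(["pcp_1"] * 4))                 # 1.0
# Visits split between a PCP and an endocrinologist
print(coci(["pcp_1", "pcp_1", "endo_1"]))  # (4 + 1 - 3) / (3 * 2) ≈ 0.333
```

Restricting `visit_providers` to a narrower specialty grouping (e.g., dropping the endocrinology visit above) removes terms from the sum and, as the study observes, generally raises the resulting score.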

To compare COCI scores using each provider grouping, we calculated the mean and SD. We calculated the correlation of continuity scores between the provider groupings using Spearman rank correlations. To visualize the distribution, we charted the COCI values for each provider grouping.

To compare year-to-year stability of COCI scores from 2018 to 2019 among individuals, we calculated the mean and SD of the absolute intraperson difference (ie, comparing each individual’s 2018 and 2019 values) and Spearman rank correlation within each provider grouping.

Finally, to assess the association of COCI score with ED visits, we constructed separate multivariable logistic regression models using each provider grouping. The unit of analysis for these models was person-month, with each participant contributing 12 units of analysis. For each month in 2019, we examined whether there was an ED visit and used the COCI score for the preceding 12 calendar months as the independent variable (as a continuous variable, with odds ratios [ORs] calculated for each 0.1-unit change in value). We included age, gender, race, insulin use, the 6 separate comorbidities mentioned earlier, and whether there was an ED visit in the previous 12 months as covariables. We used generalized estimating equations to account for individuals contributing multiple observations.

Results

The final analyses included 9578 patients, whose characteristics and health care utilization are shown in Table 1. The mean age was 50.1 years, the majority were women (72.5%), and a relatively similar proportion were White and Black (43.0% and 44.9%, respectively). The mean Charlson Comorbidity Index score was 3.1, and 37.7% used insulin. The mean number of annual ambulatory visits was 14.3 to 14.6 with all providers, and 63.0% to 63.7% had at least 1 ED visit each year.

The COCI scores using each provider grouping are shown in Table 2, and the proportions within each 0.1-unit band are shown in the Figure. We hypothesized that the narrower groupings of provider specialties would yield higher continuity scores, which they did. The mean COCI score was 0.54 using visits with all providers, 0.64 with diabetes-broad providers, and 0.83 with diabetes-narrow providers. In the diabetes-narrow grouping, 63.3% of participants had perfect continuity, whereas only 17.4% in the all-providers group had perfect continuity. We also hypothesized that continuity scores calculated using the various measures would not be strongly correlated, which was partially true. The diabetes-broad grouping was strongly correlated with the all-providers grouping (Spearman ρ = 0.82), but the diabetes-narrow grouping was only moderately correlated with the diabetes-broad (Spearman ρ = 0.65) and all-providers (Spearman ρ = 0.52) groupings.

Year-to-year stability of COCI scores is also shown in Table 2. In each provider grouping, the means in 2018 and 2019 were identical. We hypothesized that the narrower groupings of providers would be more stable year to year, but the mean intraperson difference was similar across the 3 measures, ranging from 0.16 in the diabetes-narrow grouping to 0.22 in the diabetes-broad grouping. Spearman correlations from year to year ranged from 0.41 to 0.47.

The results of multivariable logistic regression models are shown in Table 3. We hypothesized that the narrower groupings of providers would be more strongly associated with ED visits. However, COCI score had a small but statistically significant association with ED visits, with similar findings using each set of providers (OR, 0.99; 95% CI, 0.98-0.99). Each of the 6 comorbidities and insulin use were significantly associated with ED visits (point estimate of ORs ranging from 1.09 to 1.39), whereas an ED visit in the prior 12 months was the factor most strongly associated with a current-month ED visit (point estimate of the ORs ranging from 2.05 to 2.06).

Discussion

In this study, we found that continuity of care, as measured by the widely used COCI, differed substantially depending on which type of provider visits were included, with the narrowest measure appearing only moderately correlated23 with the 2 broader measures. Year-to-year continuity was only moderately correlated within participants, and continuity had only a small association with future ED visits, regardless of which grouping of providers was used to calculate continuity.

The importance of the difference in continuity that we identified depending on which provider visits were included is uncertain. Previous studies have examined correlation of continuity using different measures, such as the COCI vs the Usual Provider of Care index (an alternative continuity measure that calculates the proportion of visits with a single provider), and found strong correlation between measures (ie, Pearson correlation coefficients > 0.85). Our findings suggest that it likely matters more which visits are considered (eg, diabetes-narrow providers vs all providers) than which measure of continuity is used (eg, COCI vs Usual Provider of Care index).

Additional empirical research could be helpful for determining which provider visits should be included in continuity of care calculations to maximize their value in identifying patterns of health care delivery that are associated with the best outcomes. We believe that our method of comparing alternative provider groupings to evaluate which is most strongly associated with an outcome of interest is a promising approach. We found only a very weak association between COCI score and future ED visits, with similar results for each provider grouping, which could have several implications. Although ED visits are an important outcome—and one that high-quality outpatient care should ideally have a role in preventing—previous studies of the association of continuity of care with ED visits have had mixed results.7,9 Our cohort also had very high ED utilization (nearly two-thirds with at least 1 ED visit each year, although other studies have also found very high rates of ED utilization among patients with Medicaid),7,24 so continuity of care may simply have been overwhelmed by other factors when predicting ED use in our population. Additionally, we found that prior-year ED visits were a strong predictor of future ED visits, and this variable has not typically been included in analyses of other similar studies. It would be helpful to standardize which variables should be included in future studies of the association of continuity of care with ED visits so that results can be more easily compared across studies.

One important factor that we identified in this study was that narrower provider groupings led to a less normal distribution of continuity of care values, which may limit the COCI's utility as a measure when only a subset of provider types is included. In our study, nearly two-thirds of the cohort had perfect continuity using the diabetes-narrow providers, even when we averaged over both years. This clustering of values at 1 likely creates ceiling effects that severely limit the ability to identify outcomes attributable to differences in continuity, and thus, we would argue in favor of a measure that identifies greater spread of continuity values. The high proportion of individuals with perfect continuity in this study may at least in part reflect specific policies related to Alabama Medicaid, which assigns beneficiaries to a primary care provider. It is unclear how much using narrower provider groupings is associated with clustering at perfect continuity in other populations.

In addition to more empirical studies evaluating the association of different continuity measures with important clinical and utilization outcomes, more conceptual work is also needed if continuity of care measures are to be relied on as an important indicator of care quality. The continuity of care measure we used in this study,2 which is the most widely used generally, was developed in the 1970s for measuring continuity in the managed care era. In its initial conception, visits that resulted from self-referrals decreased continuity, whereas referrals from primary care providers did not.2 However, the measure has been adapted over time because referral data are often not included in either claims or medical record data that are used to measure continuity. As currently used, the COCI results in perfect continuity only when all the patient’s visits are with a single provider, and any visits with additional providers decrease continuity. Within primary care, a shortage of physicians has prompted a shift toward team-based care,25 and delivery of care by a team (typically a physician and an APP) may reduce the continuity score even though care may be delivered more efficiently. Similarly, the care of individuals with diabetes is becoming increasingly multidisciplinary and interprofessional, including providers beyond physicians and APPs (eg, registered nurses, diabetes educators, pharmacists, social workers). Measures of continuity favoring a larger number of visits with a more limited number of providers might not reflect the highest-quality care in contemporary practice, which may appropriately require a more diverse array of health professionals and visits.

To illustrate, consider the example scenarios shown in Table 4. For patient A, calculating the COCI score with either the diabetes-broad or diabetes-narrow providers grouping yields perfect continuity, because the visits with the orthopedic surgeon are not considered. However, for patient B, the visit with the ophthalmologist does decrease continuity when measured with the diabetes-broad providers. Given that guidelines recommend annual eye examinations for patients with diabetes,10 it seems contradictory for such visits to count against a different metric of quality of care. Similarly, for patient C, a visit with an endocrinologist results in a lower COCI score regardless of the provider grouping used. Conceptually, having multiple providers adjusting glycemic medications should result in lower continuity. However, in many cases, these additional visits with specialists are appropriate, and current measures of continuity are not able to balance the trade-off between the detrimental effect of additional visits on continuity and the potential appropriateness of these additional visits.

Our finding of only moderate year-to-year correlation of continuity has, to our knowledge, not been evaluated previously, but it has important implications. In many studies, continuity of care is treated as an exposure when evaluating its association with other outcomes, such as utilization. However, given that continuity varies over time, it is critical to correctly identify the appropriate time frame of continuity for a particular study. Some studies take a cross-sectional approach, by which continuity is measured over the same time frame as the outcome or condition that is being studied.14 However, in our study, we treated continuity as a time-varying value (as have other studies7-9,12) and used the value from the preceding 12 months for each month that we determined whether an ED visit had occurred. Given the variation over time that we demonstrated, we feel that it is important to ensure that continuity is measured over a time frame prior to any associated outcomes, although the ideal time frame to consider (eg, 6 months, 12 months, or longer) has not been well studied.

Limitations

Our study is subject to several important limitations. First, our population consisted only of nonelderly adults with diabetes who were enrolled in Alabama Medicaid, so our findings may not be generalizable to other populations. Next, other studies have recognized that number of visits may be a proxy of severity of illness and confound the relationship between continuity and outcomes such as ED visits. Because we were comparing different measures that necessarily resulted in different numbers of visits for each measure, we did not attempt to stratify based on number of visits. However, we attempted to adjust for potential confounding by severity of illness using other means (ie, comorbidities and prior ED visits). Finally, we evaluated only a single continuity of care measure (ie, the COCI) and it is unclear how the choice of which provider visits to include may affect any of the other numerous continuity of care measures.

Conclusions

This study found that continuity of care values differed substantially depending on which provider specialties were included, and correlation between measures was only moderate. Yet, all provider groupings defining continuity demonstrated a similar weak association with ED visits, providing no empirical evidence as to which provider visits are preferable to include in continuity measures. The importance of this variation is uncertain, and future studies are needed to determine which provider specialties should be included in measures of continuity when researching the quality of health care. Moreover, whether measures of continuity that reward more visits with fewer providers best reflect a modern conceptualization of continuity in the contemporary era of interprofessional, multidisciplinary care for complex chronic illnesses requires additional study and debate.

Author Affiliations: Department of Medicine, UAB Heersink School of Medicine (KRR, CAP, AAA, CRH, MJM, ALC), Birmingham, AL; Birmingham VA Medical Center (KRR, CAP), Birmingham, AL; Department of Epidemiology, School of Public Health, University of Alabama at Birmingham (LH, EBL), Birmingham, AL.

Source of Funding: The project was supported by the National Institute of Diabetes and Digestive and Kidney Diseases (R18DK109501; Cherrington). Support was also provided by the University of Alabama at Birmingham (UAB) Diabetes Research Center (P30 DK079626; Cherrington) and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (K23 AR080224; Riggs).

Author Disclosures: Dr Levitan reports receiving research funding to UAB from Amgen for cardiometabolic disease research and receiving partial salary support through UAB for work on this manuscript. The remaining authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (KRR, CAP, MJM, EBL, ALC); acquisition of data (AAA); analysis and interpretation of data (KRR, CAP, AAA, CRH, LH, MJM, EBL, ALC); drafting of the manuscript (KRR, AAA, CRH, ALC); critical revision of the manuscript for important intellectual content (KRR, CAP, AAA, CRH, LH, MJM, EBL, ALC); statistical analysis (KRR, LH, EBL); administrative, technical, or logistic support (AAA, ALC); and supervision (ALC).

Address Correspondence to: Kevin R. Riggs, MD, MPH, University of Alabama at Birmingham, 1720 2nd Ave S, MT 610, Birmingham, AL 35294-4410. Email:


1. Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R. Continuity of care: a multidisciplinary review. BMJ. 2003;327(7425):1219-1221. doi:10.1136/bmj.327.7425.1219

2. Bice TW, Boxerman SB. A quantitative measure of continuity of care. Med Care. 1977;15(4):347-349. doi:10.1097/00005650-197704000-00010

3. Jee SH, Cabana MD. Indices for continuity of care: a systematic review of the literature. Med Care Res Rev. 2006;63(2):158-188. doi:10.1177/1077558705285294

4. Bazemore A, Petterson S, Peterson LE, Bruno R, Chung Y, Phillips RL Jr. Higher primary care physician continuity is associated with lower costs and hospitalizations. Ann Fam Med. 2018;16(6):492-497. doi:10.1370/afm.2308

5. Romano MJ, Segal JB, Pollack CE. The association between continuity of care and the overuse of medical procedures. JAMA Intern Med. 2015;175(7):1148-1154. doi:10.1001/jamainternmed.2015.1340

6. Kern LM, Seirup JK, Casalino LP, Safford MM. Healthcare fragmentation and the frequency of radiology and other diagnostic tests: a cross-sectional study. J Gen Intern Med. 2017;32(2):175-181. doi:10.1007/s11606-016-3883-z

7. Kern LM, Seirup JK, Rajan M, Jawahar R, Stuard SS. Fragmented ambulatory care and subsequent emergency department visits and hospital admissions among Medicaid beneficiaries. Am J Manag Care. 2019;25(3):107-112.

8. Nyweide DJ, Anthony DL, Bynum JP, et al. Continuity of care and the risk of preventable hospitalization in older adults. JAMA Intern Med. 2013;173(20):1879-1885. doi:10.1001/jamainternmed.2013.10059

9. Kern LM, Seirup JK, Rajan M, Jawahar R, Stuard SS. Fragmented ambulatory care and subsequent healthcare utilization among Medicare beneficiaries. Am J Manag Care. 2018;24(9):e278-e284.

10. American Diabetes Association. Standards of Medical Care in Diabetes—2021. Diabetes Care. 2021;44(suppl 1):1-232.

11. Laiteerapong N, Karter AJ, Moffet HH, et al. Ten-year hemoglobin A1c trajectories and outcomes in type 2 diabetes mellitus: the Diabetes & Aging Study. J Diabetes Complications. 2017;31(1):94-100. doi:10.1016/j.jdiacomp.2016.07.023

12. Chen CC, Tseng CH, Cheng SH. Continuity of care, medication adherence, and health care outcomes among patients with newly diagnosed type 2 diabetes: a longitudinal analysis. Med Care. 2013;51(3):231-237. doi:10.1097/MLR.0b013e31827da5b9

13. Parchman ML, Pugh JA, Noël PH, Larme AC. Continuity of care, self-management behaviors, and glucose control in patients with type 2 diabetes. Med Care. 2002;40(2):137-144. doi:10.1097/00005650-200202000-00008

14. Liu CW, Einstadter D, Cebul RD. Care fragmentation and emergency department use among complex patients with diabetes. Am J Manag Care. 2010;16(6):413-420.

15. Chen CC, Chen SH. Better continuity of care reduces costs for diabetic patients. Am J Manag Care. 2011;17(6):420-427.

16. Worrall G, Knight J. Continuity of care is good for elderly people with diabetes: retrospective cohort study of mortality and hospitalization. Can Fam Physician. 2011;57(1):e16-e20.

17. Hussey PS, Schneider EC, Rudin RS, Fox DS, Lai J, Pollack CE. Continuity and the costs of care for chronic disease. JAMA Intern Med. 2014;174(5):742-748. doi:10.1001/jamainternmed.2014.245

18. Cho KH, Lee SG, Jun B, Jung BY, Kim JH, Park EC. Effects of continuity of care on hospital admission in patients with type 2 diabetes: analysis of nationwide insurance data. BMC Health Serv Res. 2015;15:107. doi:10.1186/s12913-015-0745-z

19. Gill JM, Mainous AG 3rd, Diamond JJ, Lenhard MJ. Impact of provider continuity on quality of care for persons with diabetes mellitus. Ann Fam Med. 2003;1(3):162-170. doi:10.1370/afm.22

20. Pollack CE, Hussey PS, Rudin RS, Fox DS, Lai J, Schneider EC. Measuring care continuity: a comparison of claims-based methods. Med Care. 2016;54(5):e30-e34. doi:10.1097/MLR.0000000000000018

21. Alabama Medicaid Agency. FY 2020 Annual Report. 2020. Accessed January 26, 2023.

22. Sundararajan V, Henderson T, Perry C, Muggivan A, Quan H, Ghali WA. New ICD-10 version of the Charlson Comorbidity Index predicted in-hospital mortality. J Clin Epidemiol. 2004;57(12):1288-1294. doi:10.1016/j.jclinepi.2004.03.012

23. Schober P, Boer C, Schwarte LA. Correlation coefficients: appropriate use and interpretation. Anesth Analg. 2018;126(5):1763-1768. doi:10.1213/ANE.0000000000002864

24. Barnett ML, Song Z, Rose S, Bitton A, Chernew ME, Landon BE. Insurance transitions and changes in physician and emergency department utilization: an observational study. J Gen Intern Med. 2017;32(10):1146-1155. doi:10.1007/s11606-017-4072-4

25. Rich EC. The physician workforce and counting what counts in primary care. Fam Med. 2018;50(8):579-582. doi:10.22454/FamMed.2018.595198
