Researchers Uncover Potential Racial Biases in T2D Risk Prediction Models

Authors offered specific recommendations for improving the publishing and adoption of algorithmic processes focused on type 2 diabetes (T2D) going forward.

The Prediabetes Risk Test (PRT) currently used in the US health care system, as well as prognostic type 2 diabetes (T2D) prediction models available for adoption, likely carry some degree of racial bias, new study results published online today in PLOS Global Public Health show.

This racial bias may in turn perpetuate inequities by providing fewer benefits to minorities who are already at a higher risk of metabolic diseases, authors wrote.

To improve the standards for publishing and adopting algorithmic processes in health care, the researchers provided specific recommendations, including that any published or candidate diagnostic or prognostic model demonstrate algorithmic fairness prior to adoption.

Early detection of those at high risk for T2D can help tackle the diabetes epidemic, as it allows for targeted intervention, the researchers explained. However, despite their relatively lower risk of developing T2D, non-Hispanic White individuals are overrepresented in risk prediction studies, meaning the implementation of evidence-based risk prediction could have limited generalizability to other racial groups.

“Biased prediction models may prioritize individuals of certain racial groups for preventive action at different rates or at different stages in their disease progression. Such unequal predictions would exacerbate the systemic health care inequalities we are currently seeing, which stem from socioeconomic inequalities, differential health literacy and access to health care, and various forms of discrimination between majority and minority populations,” the authors said.

To better understand whether the PRT issued by the National Diabetes Prevention Program, the Framingham Offspring Risk Score, and the Atherosclerosis Risk in Communities (ARIC) Risk Model exhibited racial bias between non-Hispanic White individuals and non-Hispanic Black individuals, investigators assessed National Health and Nutrition Examination Survey (NHANES) data. The data were sampled in 6 two-year cycles between 1999 and 2010.

Nearly 10,000 adults without a history of diabetes and with available fasting blood samples were included in the study. The investigators calculated race- and year-specific average risks of T2D based on the models and then compared risks with ones observed from the US Diabetes Surveillance System.
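The calibration check described here amounts to comparing each model's average predicted risk against the observed incidence within each racial group. The following Python sketch is illustrative only and is not the authors' code; the data frame, column names, and incidence values are hypothetical assumptions, with observed incidences standing in for figures from the US Diabetes Surveillance System.

    import pandas as pd

    # Illustrative sketch of a group-level calibration check (not the authors' code).
    # Assumes a hypothetical data frame "nhanes" with one row per participant, a
    # "race_group" column, and a column of model-predicted T2D risks.

    def calibration_ratio(df, risk_col, observed_incidence):
        """Ratio of average predicted risk to observed incidence, per race group.

        A ratio above 1 suggests the model overestimates risk for that group;
        a ratio below 1 suggests it underestimates risk.
        """
        avg_predicted = df.groupby("race_group")[risk_col].mean()
        observed = pd.Series(observed_incidence)
        return avg_predicted / observed

    # Hypothetical usage with made-up incidence values:
    # ratios = calibration_ratio(nhanes, "prt_predicted_risk",
    #                            {"non_hispanic_black": 0.012, "non_hispanic_white": 0.008})

Because the ratio is built from group averages, it summarizes calibration at the group level only, which mirrors the limitation the authors note later in the article.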

Their analyses revealed the following:

  • All investigated models were found to be miscalibrated with regard to race, consistently across the survey years
  • The Framingham Offspring Risk Score overestimated T2D risk for non-Hispanic White individuals and underestimated risk for non-Hispanic Black individuals
  • The PRT and the ARIC models overestimated risk for both races

The PRT and ARIC models also overestimated the risk of T2D more severely for White individuals than for Black individuals.

“This may result in a larger proportion of non-Hispanic Whites being prioritized for preventive interventions, but it also increases the risk of overdiagnosis and overtreatment in this group. On the other hand, a larger proportion of non-Hispanic Blacks may be potentially underprioritized and undertreated,” the researchers said.

Overall, the 3 models consistently predicted higher average risks for White individuals than for Black individuals, which contrasts with official national statistics. One solution to the algorithmic biases observed could be the inclusion of additional markers related to education, health literacy, and other socioeconomic determinants expected to correlate with race.

In recent years, algorithmic decision-making has become a key part of health care, the authors added, but “regardless of how powerful artificial intelligence models can be in capturing complex interactions and patterns in data, the appropriateness of the available datasets in terms of representativeness and quality remains crucial to algorithmic design.”

They also offered recommendations for sample size considerations in developing new algorithms:

  • Develop models in nationally representative populations
  • Develop models in cohorts with roughly equal sample sizes across groups, yielding models that are expected to perform with the same confidence and precision across groups but may perform less optimally for the majority group
  • Develop separate models in separate groups

Because findings were reported as ratios of average predicted to average observed T2D incidence, a summary measure within each racial group, results can be interpreted only at the group level, marking a limitation. Future investigations could include a wider range of models for testing, the authors wrote.

Reference

Cronjé HT, Katsiferis A, Elsenburg LK, et al. Assessing racial bias in type 2 diabetes risk prediction algorithms. PLOS Glob Public Health. Published online May 17, 2023. doi:10.1371/journal.pgph.0001556
