The first priority isn't limiting the rise in type 1 diabetes (T1D) prevalence; it's limiting the rise of missing prevalence, said Tom Robinson, vice president of global access at JDRF.
Tom Robinson, vice president of global access at JDRF, talks about the Type 1 Diabetes (T1D) Index's role in tracking T1D prevalence, which is expected to double by 2040.
What role does the T1D Index play in tracking T1D prevalence, which is expected to double by 2040?
The first thing is that we do need to be able to track change as it happens over time, and so our plan is to update the index every year, track progress, and hold ourselves and the global community to account. I would argue that our first priority isn't actually limiting the rise in prevalence; it's limiting the rise of missing prevalence: people who should be alive but have died young. That figure is expected to surpass 6 million in the mid-2030s, and the data suggest we could save 3 to 4 million of those lives just by using existing technology. In that sense, what I really want to see is more cases of type 1, or rather more people living with type 1. I think that's the first priority. But we are also starting to learn how to delay and eventually prevent type 1, and I think the findings of the index lend an added urgency to that work. One of the roles I hope it will play is to draw more funding to that space and potentially even help identify some new avenues of research, because you've got all this incidence data to set against factors that might prevent or mitigate the disease. That's the second thing that I'm really happy to see come out of the index.
What calculations were used to create these estimates in prevalence?
We use a Markov chain Monte Carlo model. What that means is, each year, we estimate the number of new cases, then we look at the T1D population, apply hazard rates for complications or mortality, and update the population numbers with those hazard rates. We run through that cycle every year, and then we do it 1,000 times, using the variability in the data, to get the Monte Carlo method. And it does get really detailed. For example, there is a specific mortality hazard rate for a 20-year-old in Brazil in 2015 who has had type 1 for 10 years. And that's different from the one for 2020, or for someone who has had it for 5 years. So that is a lot; once you start dumping the database, it's hundreds of gigabytes of data. And we didn't actually start with that specificity, by the way. During the research, we kept evaluating our cross-validated predictions and looking at where they were flawed. We gradually dialed up the specificity, adding it where it was meaningful, had an impact, and made the predictions more accurate. That's why we've ended up where we've ended up.
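The yearly cycle described above can be sketched in a few lines of code. This is only a toy illustration of the general technique (a Monte Carlo simulation over a yearly Markov cohort update), not the T1D Index's actual model: the incidence numbers and hazard rates below are made up, and the real model conditions hazards on age, country, diabetes duration, and calendar year.

```python
import random
from collections import defaultdict

def yearly_update(pop_by_duration, new_cases, hazard_by_duration, default_hazard):
    """One Markov step: incident cases enter at duration 0, and each
    duration cohort survives with probability (1 - hazard)."""
    updated = defaultdict(float)
    for duration, count in pop_by_duration.items():
        h = hazard_by_duration.get(duration, default_hazard)
        updated[duration + 1] += count * (1.0 - h)  # survivors gain a year of duration
    updated[0] += new_cases
    return updated

def monte_carlo(years, runs, seed=42):
    """Repeat the chain `runs` times, sampling incidence each year to
    capture the variability in the data (all rates here are hypothetical)."""
    rng = random.Random(seed)
    hazards = {0: 0.02, 1: 0.015}  # hypothetical: higher mortality soon after onset
    totals = []
    for _ in range(runs):
        pop = defaultdict(float)
        for _ in range(years):
            new_cases = max(0.0, rng.gauss(1000.0, 100.0))  # hypothetical incidence draw
            pop = yearly_update(pop, new_cases, hazards, default_hazard=0.01)
        totals.append(sum(pop.values()))
    return totals  # one prevalence estimate per run; summarize as mean/interval
```

Running many repetitions and summarizing the spread of `totals` is what turns the single deterministic chain into a distribution of prevalence estimates, which is why the full database, with hazards indexed by age, country, duration, and year, grows to hundreds of gigabytes.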