The evolving landscape of artificial intelligence (AI) use in cancer care requires evaluating algorithms before implementation and continually monitoring them afterward, explained Amy Abernethy, MD, cofounder of Highlander Health and former FDA official.
There have been “remarkable” advances in the capabilities of artificial intelligence (AI) for cancer care, but the next challenge is scaling these tools while ensuring they are safe and effective for everyday use, said Amy Abernethy, MD, cofounder of a new investment firm, Highlander Health, during a presentation at ESMO Congress 2024. Abernethy previously spent nearly 3 years at Verily and 2 years at the FDA.
When Abernethy was at the FDA, several hundred AI algorithms achieved regulatory clearance, a number that has since increased to nearly a thousand, she said. As the number of algorithms for clinical practice grows, providers need to be able to evaluate their performance at scale.
“Less than half of the algorithms that we have available to us have been evaluated, and one of the challenges is they often have not been evaluated in our local clinical settings,” Abernethy said. “So, we not only have to know that an algorithm performs as expected, but performs as expected here with my patients.”
The algorithms being used are generated from data sets that have become increasingly complex. As these data sets are combined with genomics to create more sophisticated algorithms, evaluating them at scale becomes necessary.
There are 3 ongoing trends that are making evaluation at scale possible, she said. First, high-quality, complex, well-organized multimodal data sets are more available. Second, there are more confident approaches to how algorithms are evaluated in both regulatory and nonregulatory contexts. Third, there have been changes in evidence generation.
There are multiple categories of AI use to be evaluated: AI-based medical products; AI for discovery, automation, and administrative tasks; and AI for clinical evidence and data generation.
For AI-based medical products, there are regulatory paradigms in place. One example is a continuous glucose monitor that uses an AI algorithm to determine glucose levels and the amount of insulin that should be administered. In a closed-loop system, the algorithm sends a signal to the insulin pump about how much insulin the patient needs.
There are some instances where regulatory oversight isn't needed, as in the use of AI for discovery, automation, and administrative tasks. Algorithms can be leveraged for target discovery for new medications, with the resulting drugs then evaluated in clinical trials. AI used in the clinic to transcribe case notes or streamline administrative tasks can likewise be evaluated there without regulatory oversight.
Finally, there are now algorithms that support clinical trials and evidence generation. These algorithms curate data, support generation of real-world data, and help match patients with the right trials.
The regulatory models are evolving, but systematic application of good machine learning practices and, when needed, thorough premarket evaluation of algorithms followed by continuous monitoring are critical, Abernethy said. Algorithms with a high risk of harming patients if they are not safe and effective need to be evaluated before they reach the clinic, but also over time because of AI's shortcomings, such as its tendency to hallucinate.
“Algorithms that can learn have the opportunity to continue to learn and improve across time but also can deteriorate across time if not careful,” she said. “And so, there's the expectation of continued evaluation.”
Similar to oncology, there has been a shift in clinical evidence generation for AI in medicine. The traditional model ran from phase 1 through phase 4 clinical trials; now, early approval pathways allow regulatory approvals as early as phase 2 or 2b, with more evaluation in the postmarketing setting.
“This story has been playing out in the drug landscape and oncology for quite a while and is a useful paradigm for us as we think about the evaluation of algorithms,” Abernethy said.
In addition to the evolving landscape, the data are evolving too, she said. Information is being collected during routine episodes of care and in formalized settings like clinical trials. These 2 types of data are also being combined with other data sets, such as genomics, imaging, and sensor data. All of these data can be organized into multimodal data sets.
These data allow for complex study designs, enable clinicians to monitor patients across time, and support longitudinal registries that underpin algorithm development and monitoring, she explained. This approach opens new avenues for clinical evidence generation, and embedding AI in the process makes it faster and more efficient.
“These evolving trends, which we're seeing across oncology and also across medicine, now put us in a place where, as oncologists, we have these remarkable solutions that can be put in place to help better take care of our patients,” Abernethy concluded. While there are evolving regulatory pathways, oncologists “need to be prepared to understand what does good look like with respect to AI algorithms? How do we have confidence to apply these algorithms in our clinic? And how do we look towards the future?”