Amy Valley, vice president for clinical strategy and technology solutions at Cardinal Health, weighs both the benefits and potential risks of physicians using artificial intelligence technology in their oncology practices.
We need to make sure we're not losing the art of medicine when implementing artificial intelligence (AI) into oncology care, according to Amy Valley, vice president for clinical strategy and technology solutions at Cardinal Health.
What are the most important uses of technology in community oncology and where is the use of technology in practices falling short of potential?
Well, I think it's a nice period that we're in, with so much innovation going on in technology, and that gives practices a lot of options. But then the problem becomes how to decide among those partners and vendors, and where we're falling short is the connectivity.
I think many of the products in the market have paid attention to making sure that price points are affordable and that there's a good ROI [return on investment] for the practice based on the technology. But the cost of integration and the ability to integrate are, I think, where we're still falling short, and so that's got to improve.
I think some of the things in the Affordable Care Act around interoperability are going to keep incentivizing that integration to happen. But in the real world, practices are dealing every day with 12-plus vendors just to run their practice and to participate in these models. And there are costs to making those integrations work, along with time factors and operational impact. So, we've got to get better at the integration part of things for all of this to work and for the innovation to be the most meaningful.
We are hearing a lot about the potential of AI. Where can AI most help physicians in practice, and what are the risks?
I think it can help in virtually every aspect of the practice. We think about AI in terms of being able to manage the large amounts of data coming into the practice and—even with decision support and treatment planning—being able to bring some efficiency to that process. That I think of as more of a clinical and clinical operations type of application.
We have a tool, as I mentioned before, that applies AI to the massive amounts of data available on social and behavioral determinants of health, and to how those factors affect a patient's risk and quality of care. All of those are important—and we're certainly seeing applications in imaging and radiology as well. In virtually every aspect of care, AI has great potential to help us.
I think where everyone gets concerned is, “What if the solutions aren’t vetted appropriately?”
Who's really watching and making sure that the quality is there—that we're not just making bad decisions available more quickly to the prescriber rather than taking the time to really understand what's driving the data and the algorithms? And we need to make sure that we're not losing the art of medicine as we advance and add these innovative technologies.