Health systems produce torrents of data—so much so that making sense of all this information to improve patient health is a challenge that few could imagine a generation ago. But those data are exactly what payers and providers need if health care is to move from a fee-for-service to a value-based reimbursement system.
Today’s challenge, according to Iksha Herr, MS, managing director for Data and AI, Healthcare and Life Sciences at Microsoft, is learning how to leverage artificial intelligence (AI) to process all these data and to drive insights that lead to better care delivery.
Herr’s keynote address, “Building a People- and Patient-Centric Healthcare Continuum by Utilizing the Full Potential of Data & AI,” capped the second morning of Patient-Centered Oncology Care® 2021, presented September 23 and 24 by The American Journal of Managed Care® in Nashville, Tennessee.
For Herr, the connection between the life sciences industry and the health care delivery system is best understood as a continuum, one that starts with the industries where drugs are developed before moving to the providers who treat patients and the payers who cover much of the cost. “All of these areas have varying levels of AI use. Data are obviously the lifeblood of all of this,” she said. “So, no doubt, a lot of data are involved for drug discovery.”
AI is not really seen yet at the preclinical stage of drug development, but in the clinical stage the use of AI “is quite mature and will continue to evolve,” Herr said. AI also plays a role once a drug is launched.
The use of AI to drive value-based agreements—in which drug developers are rewarded for how well therapies work—remains a work in progress. Data contained in electronic health records (EHRs) play a big role, and data have long been used in risk assessment and adjustment. Bringing it all together at the provider level requires more work in the policy, data governance, and ethics arenas—and that will take greater coordination among all stakeholders, Herr said.
Right now, clinical trials collect data, and EHRs and other sources—such as payer claims and registries—are well-known generators of real-world data. There are scores of new sources—from digital devices to the internet of things—that collect data for one purpose but can now be leveraged to improve AI and outcomes research. The question: How can life sciences and health care create a framework where everyone agrees on the rules for using these new data repositories?
This is important for both the future of research and for health care delivery, Herr said. “Research can take place in multiple different settings with different stakeholders. That’s where technology is needed as well, for management and curation,” she noted.
In other words, the more data sources grow and evolve, the more technology will be needed to make sense of it all. And beyond the stakeholders in the life sciences and health care, Herr said, the most important stakeholders are ordinary people whose lives can improve through data use. Not everyone is a patient, or a patient all the time—so data from healthy people are just as important.
“We’re all part of this continuum, because at one point or another, ‘people’ become ‘patients,’” Herr said. “The other thing to consider, because AI is so dependent on data, is that we need data from both patients and people who are not yet patients to understand the trajectory of disease.”
Use of this more complete data set will be the step that leads to better drug development and outcomes, Herr predicted. But getting there will require the AI policy landscape to catch up. Herr reviewed several policy declarations from both Europe and North America, including recent events that could offer guidance in patient privacy and algorithm creation:
• The cities of Amsterdam, the Netherlands, and Helsinki, Finland, created “algorithm registries” to bring transparency to public deployments of AI. The European Union has proposed algorithm standards.
• New York City has proposed rules for using algorithms in recruiting and hiring, while the cities of Boston, Minneapolis, San Francisco, and Portland, Oregon, have imposed bans on facial recognition.
• In January, the FDA proposed an AI/machine learning action plan to govern the use of medical records for algorithm creation. The plan sets software standards and encourages certain best practices for the pharmaceutical industry.1
Until recently, Herr said, there has been tremendous data generation and AI activity, but sharing across the life sciences with payers and providers has been uncommon. To fully realize the power of AI, Herr declared, the health care continuum must be “reimagined” to reach across all the relevant verticals.
“It’s not just life sciences, providers, and payers, but also academic institutions, technology providers, consortia, research institutions—all of these,” Herr said. “What if there is bidirectional flow of data and AI insights that each of these parts produces, and then shares with the other parts in the interest of people and patients?”
Data sharing during COVID-19 offers a great example of such a data system, Herr noted. Similarly, Herr added, “What if there were standards—and not just one, but multiple ones—that would help to mature this space?”
Throughout her presentation, Herr built a model that put patients and people at the center, then slowly added in all the stakeholders and regulatory mechanisms that would round out a shared data framework. “My call to action is really centered around the stakeholders,” she said. “Then people and patients can be a big part of that puzzle—providing the volumes of data that are needed.
“However, I think one of the biggest things in AI is representativeness,” Herr stressed. All types of people—sick and healthy, from various points along the continuum—must be included for the data set to function.
Federal agencies, including the US Department of Commerce and the National Institute of Standards and Technology, will focus on implementation issues. Some exchange protocols are in place, so that when AI is used at a population level there will be adequate governance to ensure privacy and ethical use, but more work is needed in this area.
Why is policy development needed? As Herr explained, there is significant investment in AI—and as the technology spreads, standards governing its use become essential. Federal spending on AI is projected to rise to $13 trillion by 2030. “These are big numbers—a lot is at stake here,” she said. “So, it’s really imperative for us all to participate, talk about things, and make an impact on these developing areas.”
1. Artificial intelligence and machine learning software as a medical device. FDA. January 2021. Updated September 22, 2021. Accessed October 27, 2021. https://www.fda.gov/medical-devices/software-medical-device-samd/