
Dr Stephen Speicher Navigates the Ethics of AI in Health Care


Stephen Speicher, MD, Flatiron Health, highlights the organization's frameworks for ensuring ethical artificial intelligence (AI) usage, emphasizing the importance of providers and health systems asking critical questions when considering AI tools.

In an interview at the Association of Community Cancer Centers’ (ACCC) 50th Annual Meeting & Cancer Center Business Summit, Stephen Speicher, MD, senior medical director and head of health care quality and safety at Flatiron Health, discusses the integration of artificial intelligence (AI) in health care. He emphasizes the importance of patient safety, ethical use, and regulatory considerations, along with Flatiron Health's frameworks for ensuring ethical AI usage.

Transcript

What were some of the main points you discussed during the deep dive into artificial and business intelligence technology?

The session really focused on artificial intelligence in health care. We went through some of the basics, making sure everyone had a good overview, and then I spent the majority of my time talking about patient safety and health care quality as they relate to artificial intelligence in health care. It's obviously a very new and exciting technology, and we're really trying to figure out how we think about safety and quality as we use these tools in health care, knowing that patients' lives are on the line and that we need to be thoughtful about how we're using them.

What are some frameworks that Flatiron Health is putting in place to make sure AI is being used ethically?

I think we ask ourselves a lot of different questions. As we think about the different problems we could solve using artificial intelligence, we start by evaluating the specific use cases and how risky they are. There's a big differentiation between a business or back-end use case and a front-end or clinical use case. Things like scheduling, automating notes, or prior authorization (back-end aspects that are not as clinically relevant) are less risky in general, so when we're deciding what to take on, we really think about the risk tolerance for those different use cases. If we're talking about something much more clinically impactful, like treatment decisions or diagnosis, we're obviously going to have to be more risk averse as we think about using this technology in those ways.
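To make this kind of risk tiering concrete, here is a minimal sketch in Python. The tier names, use cases, and oversight levels are hypothetical illustrations of the triage Speicher describes, not Flatiron Health's actual framework.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers, from least to most clinically impactful."""
    BACK_OFFICE = 1        # scheduling, prior authorization, note automation
    CLINICAL_SUPPORT = 2   # drafts or summaries reviewed by a clinician
    CLINICAL_DECISION = 3  # diagnosis or treatment recommendations

# Illustrative mapping of use cases to tiers; a real framework would be
# far more granular and owned by a governance committee.
USE_CASE_TIERS = {
    "appointment_scheduling": RiskTier.BACK_OFFICE,
    "prior_authorization": RiskTier.BACK_OFFICE,
    "visit_note_drafting": RiskTier.CLINICAL_SUPPORT,
    "treatment_recommendation": RiskTier.CLINICAL_DECISION,
}

def required_oversight(use_case: str) -> str:
    """Return the (hypothetical) level of review a use case would need."""
    tier = USE_CASE_TIERS[use_case]
    if tier is RiskTier.BACK_OFFICE:
        return "standard software QA and monitoring"
    if tier is RiskTier.CLINICAL_SUPPORT:
        return "clinical review plus ongoing output audits"
    return "full clinical validation with a human in the loop"

print(required_oversight("prior_authorization"))
# -> standard software QA and monitoring
```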

The next thing we think about is how we make sure we're not incorporating bias into these tools. One of the biggest things we're talking about in health care right now is health equity. We know that artificial intelligence has the ability to either improve equity for patients or further inequities in health care. So, how do we think about our underlying data infrastructure? Is there potential bias in that data, and can it lead to bias in the actual tool? Then we think about data quality. Data is the underpinning of these models; it is crucial to these models being good, being valid, and being able to actually be used in health care. So we have to know: What is the quality of the data we have? What quality metrics do we need to put around the data? How do we evaluate the quality of that data? How do we look at the historical data? All those things are crucial.
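Those data quality and representation questions lend themselves to simple quantitative checks. The sketch below is a minimal illustration using pandas; the column names (age, race_ethnicity, stage) and the metrics chosen are assumptions for demonstration, and a production pipeline would apply far richer validation.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, subgroup_col: str) -> dict:
    """Compute a few illustrative data quality and representation metrics.

    High missingness flags fields too sparse to rely on; small subgroup
    shares flag populations so underrepresented that a model trained on
    this data could perform worse for them.
    """
    return {
        "row_count": len(df),
        # Fraction of missing values per column
        "missingness": df.isna().mean().round(2).to_dict(),
        # Share of records per subgroup (eg, race and ethnicity)
        "subgroup_shares": df[subgroup_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical toy records
records = pd.DataFrame({
    "age": [64, 71, None, 58],
    "race_ethnicity": ["White", "White", "Black", "White"],
    "stage": ["II", None, "III", "IV"],
})
report = data_quality_report(records, subgroup_col="race_ethnicity")
print(report["subgroup_shares"])
# -> {'White': 0.75, 'Black': 0.25}, a possible underrepresentation signal
```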

And I would say the last thing we're always thinking about is how these tools are being used. Knowing that these tools are going to be used by clinicians, whether nurses or physicians, we want to make sure they know exactly what these tools do. We want full transparency into what the artificial intelligence is actually doing, and we want to make sure providers know what they're supposed to do, what they're not supposed to do, and how these tools should be used in a clinical setting.
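One established way to deliver that kind of transparency is model documentation along the lines of a "model card" that tells clinical users what a tool is for and where it breaks down. The sketch below is hypothetical; the fields and values are illustrative assumptions, not any vendor's actual documentation.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal, hypothetical documentation so clinical users know what an
    AI tool does, what it must not be used for, and where it breaks down."""
    name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    known_limitations: list[str]

card = ModelCard(
    name="visit-note-drafter-v1",  # hypothetical tool
    intended_use="Draft oncology visit notes for clinician review and sign-off.",
    out_of_scope_uses=["autonomous diagnosis", "treatment selection"],
    training_data_summary="De-identified oncology notes, 2015-2022 (assumed).",
    known_limitations=["may omit negative findings", "English-only"],
)
print(card.intended_use)
```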

One of the big questions right now is, who's going to regulate this? Broadly across artificial intelligence, a variety of different groups are thinking about regulation, and within health care there's a really big push to get these things regulated. Opinion is pretty unanimous that artificial intelligence in health care should be regulated; the question becomes, "By whom?" There's a lot of fragmentation in the regulatory framework within health care, so is this supposed to be regulated by the FDA or HHS?

We're learning in real time what those regulations are going to be and who's going to issue them. The White House put out a statement on this last year, with an entire document on the safe and effective use of artificial intelligence; the Department of Health and Human Services will probably put something out in the next year as well; and there's a big new rule from the Office of the National Coordinator for Health Information Technology (ONC) called HTI-1 [Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing], which is a big regulatory rule for health care IT companies to look at as they try to figure out how to make sure these products are safe.

I would just say, with all those regulatory pressures, it's great because it's making sure that safety and quality are front of mind. That being said, these regulatory frameworks likely won't roll out for a little while, so it's important for health care IT vendors, and also for providers, to think about how to assess the safety and quality of these tools. There's a lot of onus on providers, health care administrators, hospitals, and health systems: as you're getting pitched these tools, really start to ask good questions. Make sure you get demos. Ask, "What does this tool do? How do we know it's valid? What are your data quality standards? How will it be implemented?" Those are all really good questions for providers, health systems, and administrators to think about as you start having these conversations with AI vendors.
