Oncologists Find Ethical Challenges With Artificial Intelligence in Cancer Care

Explainability, consent, and responsibility are among the ethical challenges that may hinder the implementation of artificial intelligence (AI) in cancer care, according to a new study.

In a recent survey of more than 200 US oncologists, most indicated that oncologists should be able to explain how AI decision-making models work in order to use them in the clinic.

As AI becomes more common across cancer care settings, oncologists are grappling with the ethics of its use in medical decision-making. In this study, the researchers aimed to gain an understanding of oncologists’ views on the ethics of AI in the clinical setting.

Physician holds hand of patient | C Davids/peopleimages.com - stock.adobe.com

Findings of the descriptive, cross-sectional survey were published in JAMA Network Open.1

"The findings provide a first look at where oncologists are in thinking about the ethical implications of AI in cancer care," Andrew Hantel, MD, a faculty member in the Divisions of Leukemia and Population Sciences at Dana-Farber Cancer Institute and lead researcher of the study, said in a statement.2 "AI has the potential to produce major advances in cancer research and treatment, but there hasn't been a lot of education for stakeholders—the physicians and others who will use this technology—about what its adoption will mean for their practice.”

The population-based survey was conducted from January to July 2023 using data from the National Plan & Provider Enumeration System. A draft instrument was developed, consisting of 24 questions pertaining to demographics and the following domains: AI familiarity, predictions, explainability, bias, deference, and responsibilities.

The main outcome of the study was oncologists’ responses to the question of whether patients need to provide informed consent for AI model use during cancer treatment decisions.

Of the 387 surveys distributed, 204 were completed. Participants represented 37 states; 63.7% identified as male, 62.7% as non-Hispanic White, and 29.5% practiced in academic settings. Additionally, 46.6% had some previous education on AI in health care, and 45.3% reported experience with clinical decision models.

Furthermore, 84.8% of participants believed that AI-based clinical decision models should be explainable by oncologists to be used in a clinical setting, and 23% said these models should also be explainable to their patients.

Most participants (81.4%) supported the use of AI models in treatment decisions. When presented with a scenario in which an AI model selected a different treatment regimen than the oncologist had planned, 36.8% of participants said they would present both options and let the patient decide. Additionally, oncologists from academic settings were more likely to let the patient decide which treatment option they wanted (OR, 2.56; 95% CI, 1.19-5.51).

Additionally, most participants (90.7%) believed AI developers were responsible for medico-legal problems caused by AI use, although some oncologists believed this responsibility should be shared with physicians (47.1%) or hospitals (43.1%). Furthermore, 76.5% of participants agreed that it was the oncologist’s responsibility to protect patients from biased AI tools, but only 27.9% were confident in their ability to identify poorly representative AI models.

These findings suggest that most oncologists believe patients do not need to understand the methodology behind AI decision models, even though most agreed that patients should consent to their use, and many tasked patients with choosing between physician- and AI-recommended treatment regimens.

Therefore, the researchers believe that the implementation of AI in oncology care must include assessments of its impact on care decisions as well as considerations of responsibility for treatment decisions.

"It's critical that we assess now, in the early stages of AI's application to clinical care, how it will impact that care and what we need to do to make sure it's deployed responsibly. Oncologists need to be part of that conversation," Hantel said in a following statement.2 “This study seeks to begin building a bridge between the development of AI and the expectations and ethical obligations of its end-users.”

References

1. Hantel A, Walsh T, Marron J, et al. Perspectives of oncologists on the ethical implications of using artificial intelligence for cancer care. JAMA Network Open. Published online March 28, 2024. Accessed March 27, 2024.

2. Study provides a first look at oncologists' views on ethical implications of AI in cancer care. Dana-Farber Cancer Institute. News release. Published March 28, 2024. Accessed March 27, 2024.
