In recent months, the topic of artificial intelligence (AI) has taken center stage in legacy media, on social media, and in impassioned rants on YouTube. The idea of self-aware programs that learn beyond their original instructions is the stuff of science fiction, suddenly made real. The possibilities are both breathtaking and genuinely frightening. Novel video content, new music, and art are now being generated by AI systems, and some of it is quite interesting. I have been genuinely impressed by a recent series of AI-generated trailers for Star Wars, Avatar, and The Lord of the Rings as reimagined by the idiosyncratic director Wes Anderson; this particular AI seems to have a decidedly twisted sense of humor. Yet amid the wonder, there is cause for concern. From Elon Musk to AI “godfather” Geoffrey Hinton, tech entrepreneurs and experts are warning that self-aware programs pose an existential threat, and some have proposed a moratorium on further AI development until rules governing the evolution of these technologies are in place. More immediately, programs such as ChatGPT (which has become the fastest-growing AI application, with more than 100 million users)1 are creating challenges for educators trying to combat academic fraud.
Leaders in the health care space are not immune to AI’s purported charms. Could it solve challenges such as matching patients to precision medicine–based treatments or creating big data cancer analytical models? The allure of AI is undeniable. Much in oncology has yet to be learned, so opportunities abound to find more effective treatments or more economically sustainable models of care within a nearly unnavigable morass of electronic health records, pathology reports, genomic testing results, claims data, and unstructured clinical narratives. The idea that AI could make this knowledge accessible and actionable is certainly enticing.
Whether a miracle cure for health care’s ills or a threat to humanity, AI has become our latest shiny new object. Herein lies the danger. As is so often the case, we latch onto shiny new objects (the so-called shiny object syndrome) as a panacea for seemingly insurmountable challenges. There is work to be done to address the economics of care delivery and make it sustainable, but by placing too much of our fate in the metaphorical hands of these new technologies, we risk never coming to terms with challenges that require action today. Placing our bets on technologies that may never fulfill their potential is a recurrent theme in health care, and I fear that AI represents yet another solution likely to overpromise and underdeliver.
People who need treatments now and clinicians striving to deliver high-quality care cannot wait for sentient machines to figure things out. Solutions to the toughest challenges we face in oncology today will require a deepening of the human connections between patients and clinicians, as well as enhanced collaboration between policy makers and health care leaders. These human-led efforts are far more likely to address unmet patient needs effectively while ensuring that people from underserved communities achieve more equitable care outcomes. In this issue of Evidence-Based Oncology, you will read about clinicians and systems leaders who are proposing creative solutions to our most pressing cancer care challenges. The future of oncology is here in these pages, not in the emerging thoughts of some yet-to-be-created AI bot. A cancer journey is a human experience, not a technological one. We cannot technologize ourselves to greater levels of kindness, compassion, or human self-awareness. As tempting as shiny new objects may be, the most promising solutions remain in the creativity and humanity of our patients, clinicians, and health care leaders.