
Promises and Pitfalls of AI in Health Care
Key Takeaways
- AI is revolutionizing health care by improving diagnostics, reducing clinician workload, and enhancing care access, particularly in resource-limited settings.
- Challenges include AI bias, transparency issues, and the need for human oversight, necessitating safeguards and accountability in AI deployment.
AI can transform health care by processing vast amounts of data, enhancing decision-making, and addressing biases, but the needs for transparency and human oversight remain.
The rapid integration of AI into health care is already making measurable differences: enhancing image analysis,3 predicting patient risk,4 and tackling the many administrative tasks and documentation fatigue plaguing clinicians and other health care providers.5,6 There is also potential for AI to expand access to care, particularly in resource-limited settings.
The path to wider adoption of AI is laden with significant concerns and obstacles, including bias, the overshadowing of clinician judgment, and errors made in the name of efficiency. These make it necessary to place safeguards around decision-making, establish accountability, preserve the human aspect of health care, and validate results. This evolving relationship among clinicians, patients, and AI holds both immense promise for medical advancement and deep concern about maintaining an ethical and human foundation of care.
Where AI Is Working and Where It's Stuck
AI systems are adept at processing massive amounts of data and unlocking critical insights, thereby significantly improving predictive models, allowing for better patient targeting, enhancing documentation completeness, mitigating burnout, and reducing expenses.8,9 Sanjay Doddamani, MD, MBA, former senior advisor to CMMI and founder and CEO of Guidehealth, told The American Journal of Managed Care® (AJMC®) that AI also enables faster, data-driven decisions through the following:
- Processing unstructured data and unlocking insights from patient charts, paid claims, and admission, discharge, and transfer feeds
- Incorporating patient-facing voice and large language models to scale patient outreach, close quality gaps, and extend care teams
- Capturing patient interactions through ambient note taking
- Identifying high-friction workflows
- Measuring time saved in real clinical minutes vs theoretical gains
“AI was almost purpose-built to relieve these burdens. The key is targeted deployment,” he said. “When properly designed, AI should return time to clinicians, not take more.”
Adoption remains slow, however, owing to factors that include user acceptance, misaligned use cases, high costs, and the health care sector’s natural inclination to move with caution. Clinicians also require confidence that the tools they use are accurate and reliable, going so far as to ask for detailed breakdowns, said Eirini Schlosser, founder and CEO of Dyania Health, a health care AI company that uses natural language processing to help match patients with clinical trials.
“We explain ourselves for every single answer given by pinpointing the part of the electronic medical record that led to a certain conclusion,” said Schlosser, explaining that the rigidity of medical care often leaves little spare time to solve problems or innovate. On the flip side, however, this lends itself to opportunity and progress. “We have seen physicians become ecstatic about the opportunity to study a set of insights in a patient population that they always wanted to examine but never had the time and resources to spend on it.”
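Schlosser’s point about pinpointing the part of the record behind each conclusion suggests a simple provenance pattern. The sketch below is a loose Python illustration of that idea; the structures, field names, and sample note are invented for this article and are not Dyania Health’s actual design:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    note_id: str   # which document in the medical record
    start: int     # character offsets into that note
    end: int

@dataclass
class Conclusion:
    statement: str
    evidence: Evidence

    def render(self, notes: dict) -> str:
        # Pull the exact chart excerpt that supports this conclusion.
        excerpt = notes[self.evidence.note_id][self.evidence.start:self.evidence.end]
        return f'{self.statement}\n  supported by {self.evidence.note_id}: "{excerpt}"'

# Hypothetical note text and criterion, purely for illustration.
notes = {"progress_note_07": "Patient reports worsening dyspnea on exertion over 3 weeks."}
c = Conclusion(
    statement="Meets trial criterion: symptomatic dyspnea",
    evidence=Evidence("progress_note_07", start=16, end=45),
)
print(c.render(notes))
```

The design point is that every answer ships with a pointer back to its source text, so a clinician can verify the conclusion rather than take it on faith.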
The Call to Address Transparency
There is also a strong call for better transparency, including safeguards to resist premature clinical substitution. Experts agree that, at present, AI should inform decisions, not make them outright.
“In scenarios where the decision is of critical nature, we definitely ensure that we have a man-in-the-loop style implementation; for example, in the example of care management, where an AI solution helps us identify individuals who are in highest need of care,” said Reuben Daniel, associate vice president of AI at
Key safeguards for deployment include prospective, side-by-side validation with clinicians and transparent model provenance. Clear delineation of responsibility is also necessary, ensuring the clinician remains the final check.
“Trust depends on humility, using AI to support clinical wisdom rather than replace it,” said Doddamani.
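The human-in-the-loop arrangement Daniel and Doddamani describe can be pictured as a review queue: the model flags patients but never acts on its own. The following minimal Python sketch, with a hypothetical threshold and invented patient data, illustrates one way such gating might look:

```python
from dataclasses import dataclass, field

RISK_REVIEW_THRESHOLD = 0.75  # illustrative cutoff; a real system would validate this prospectively

@dataclass
class Patient:
    patient_id: str
    risk_score: float   # output of a predictive model, 0 to 1
    evidence: str       # pointer to the chart context behind the score

@dataclass
class ReviewQueue:
    """Holds AI-flagged patients until a clinician accepts or rejects each flag."""
    pending: list = field(default_factory=list)

    def triage(self, patients):
        for p in patients:
            if p.risk_score >= RISK_REVIEW_THRESHOLD:
                # The model only *flags*; the clinician remains the final check.
                self.pending.append(p)

    def clinician_decision(self, patient_id: str, accept: bool) -> str:
        # Only this human step results in an action.
        self.pending = [p for p in self.pending if p.patient_id != patient_id]
        return "enroll in care management" if accept else "no action"

queue = ReviewQueue()
queue.triage([
    Patient("A-101", 0.91, "HbA1c 11.2%; 2 ED visits in 90 days"),  # invented example
    Patient("A-102", 0.40, "routine annual visit"),
])
for p in queue.pending:
    print(f"{p.patient_id}: score {p.risk_score:.2f} -> awaiting clinician review ({p.evidence})")
```

Here the model’s output is an input to clinician judgment, never a substitute for it.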
Letters sent to several of the nation’s largest Medicare Advantage insurers are pressing for the same.11 In the inquiries headed by Sen Richard Blumenthal (D, Connecticut), UnitedHealth Group, Humana, and CVS Health have been asked to clarify what types of AI they are using to make decisions about patient care and coverage, as well as what policies they have instituted since last year “to prevent AI tools from ‘unduly influencing’ the work of human clinicians.”
For developers, strict adherence to these boundaries also is essential. Schlosser explained that Synapsis AI, the system Dyania Health uses, does not claim to be a medical device and exists solely to support clinical decisions. “Safeguards must be set up for any AI model, whereby first and foremost the training data and tasks closely mimic the task the model will be asked to complete,” she said, “helping doctors make faster and more confident data-driven decisions.”
The Call to Address Bias
There are many bias-related concerns connected to the use of AI in health care. One definition of AI-related bias is “any systemic and/or unfair difference in how predictions are generated for different patient populations that could lead to disparate care delivery.”12 To address this bias and ensure equitable outcomes, the health care community must actively counter issues stemming from skewed data.
Building on that, there is potential for biases to be inadvertently programmed into AI systems, with negative impacts on patient care, because of the data fed into them.13 Health records, patient demographics, and the prioritization of healthier patients from one racial/ethnic group over another, as has happened in several US health systems, have all been implicated.
Research also hypothesizes that certain populations are predicted to have lower risks because these groups have less documented health care use, thereby exacerbating health care inequities that have long been apparent. As experts note, “It’s as much an issue of society as it is about algorithms.”14
One example in particular, according to Harvard research, was the Framingham Heart Study cardiovascular risk score, which produced unbalanced results for African American and Caucasian patients. Another is the use of racially biased algorithms that predicted health care costs rather than illness; when evaluated against a given risk score, Black patients turned out to be sicker than White patients.15 Failure to account for social determinants of health has also been implicated, especially data on income level, education, and environment.16
“To make AI fairer and more equitable, the health care community needs to use more diverse and complete datasets, test bias regularly, and include experts from different fields, like clinicians and data scientists, in the process,” Schlosser noted. “Equity shouldn’t be an afterthought. It should be built into every step of how health care AI is designed, tested, and applied.”
Doddamani agreed. “Data breadth has improved, but bias persists if we fail to measure it,” he said. “Equitable AI requires deliberate monitoring, not passive confidence in large data sets.”
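The deliberate monitoring Doddamani calls for can be as simple as routinely comparing a model’s behavior across demographic groups. Below is a minimal Python sketch of such an audit; the groups, tolerance, and data are invented for illustration, and any real audit would use validated fairness metrics:

```python
from collections import defaultdict

DISPARITY_TOLERANCE = 0.10  # illustrative: investigate gaps above 10 percentage points

def flag_rate_by_group(records):
    """records: (group, was_flagged) pairs; returns the rate at which the model flags each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, was_flagged in records:
        counts[group][0] += int(was_flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def audit(records):
    rates = flag_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > DISPARITY_TOLERANCE

# Invented example data: the model flags group B far less often than group A.
records = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 15 + [("B", False)] * 85
rates, gap, biased = audit(records)
print(rates, f"gap={gap:.2f}", "investigate" if biased else "within tolerance")
```

A production audit would likely compare error rates against adjudicated outcomes rather than raw flag rates, but the monitoring principle, measure rather than assume, is the same.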
Conclusions
The stakes for responsible innovation have never been higher. AI has already proved its value by enhancing efficiency, reducing clinician burden, and improving how patients are identified for timely interventions, but there is always room for improvement, and progress must advance with vigilance. Concerns around algorithmic bias, loss of human oversight, and transparency gaps make clear that trust is earned, not assumed. With safeguards, continuous validation, and clear communication about how AI models reach conclusions, confidence can grow.
The balance moving forward will require honoring both sides of the equation: technical excellence and ethical responsibility. When aligned thoughtfully, AI and human expertise can work in partnership, ensuring innovation improves outcomes while preserving the compassion and clinical judgment at the heart of health care.