Based on the findings of a comprehensive scoping review, investigators outline several policy implications, each designed to help mitigate bias in clinical algorithms.
In August 2022, the Department of Health and Human Services (HHS) took a significant step by issuing a notice of proposed rulemaking that would prohibit covered entities, including health care providers and health plans, from discriminating against individuals through their use of clinical algorithms in decision-making.
According to a review published in the October 2023 issue of Health Affairs, which focuses on systemic racism,2 this move was crucial in addressing growing concerns about bias in these algorithms. However, HHS did not specify how covered entities should prevent such discrimination, leaving the health care industry at a crossroads.1
To shed light on this issue, the investigators conducted a comprehensive scoping review of literature published from 2011 to 2022. The objective was to identify existing health care applications, frameworks, reviews, perspectives, and assessment tools focused on detecting and mitigating bias in clinical algorithms, with an emphasis on racial and ethnic bias.
The Scoping Review
The review encompassed 109 articles, comprising 45 empirical health care applications, 16 frameworks, and 48 reviews and perspectives. The diverse sources highlighted a broad spectrum of technical, operational, and systemwide strategies aimed at mitigating biases in clinical algorithms.
Investigators noted that an intriguing aspect of the findings was the temporal distribution of the included studies. A staggering 93% (101 studies) of the selected research was published within the past 4 years, underscoring the contemporary relevance and urgency of addressing bias in this area.
This trend indicated a rapid acceleration in research efforts in recent years, reflecting the growing awareness and concern within the scientific community and the broader health care industry about the need to tackle bias in health care decision-making tools.
Finding the Gaps
However, a notable gap was revealed in the findings. In an analysis of 16 studies that presented frameworks for addressing bias in clinical algorithms, only 5 explicitly recognized health equity as a fundamental component of their frameworks. Only 3 of these frameworks underwent testing in real-world clinical settings, highlighting a gap between theoretical constructs and practical implementation.
In the realm of health care applications and tools tested in clinical settings, a more detailed examination revealed a similar pattern. Among the 45 applications and tools, the majority focused on predeployment bias-mitigation strategies, employing techniques such as sampling and weighting methods and regularization. However, minimal evidence was found regarding the deployment of these algorithms in real clinical scenarios, a critical aspect that demands rigorous scrutiny.
Only 1 study presented a mitigation strategy applied in actual clinical practice, again indicating a significant disconnect between theoretical development and real-world application. Additionally, the review identified a notable gap in the documentation of these methods.
Out of the 45 health care applications, only 18 included links to the underlying code implementing the mitigation methods. According to the review, this lack of transparency poses a challenge for the broader scientific community, hindering the replication and validation of these strategies.
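For readers unfamiliar with the predeployment techniques cited above, the following is a minimal, hypothetical sketch of one such approach: inverse-frequency reweighting by demographic group before model training. The synthetic data, the group definition, and the use of scikit-learn are illustrative assumptions, not details drawn from the reviewed studies.

# Minimal sketch of a predeployment weighting strategy: upweight an
# underrepresented group so it contributes equally during model fitting.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: 2 clinical features, a binary outcome, and a
# group label (e.g., a demographic category), with group 1 underrepresented.
n = 1000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 2)) + group[:, None] * 0.5
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Inverse-frequency weights: each group contributes equally to the loss.
group_counts = np.bincount(group)
weights = (len(group) / (len(group_counts) * group_counts))[group]

# L2 regularization (the default penalty here) is another predeployment
# lever the review groups with these techniques.
model = LogisticRegression()
model.fit(X, y, sample_weight=weights)

# Compare accuracy by group to check whether reweighting narrowed the gap.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy {model.score(X[mask], y[mask]):.3f}")

A sketch like this addresses only the training stage; as the review emphasizes, evidence on how such strategies perform once deployed in clinical practice remains scarce.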
Despite the wealth of information gathered, the scoping review revealed a lack of consensus in the literature regarding a universally agreed-upon best practice that covered entities could adopt to meet the requirements set forth by the HHS.
Implications for Policy
Based on the findings, the investigators outlined several crucial policy implications, each designed to address the complex issue of bias mitigation in clinical algorithms effectively.
1. Ensure Professional Diversity: Policy efforts should be directed toward encouraging developers and health care organizations to establish diverse and appropriately trained workforces. This diversity should encompass various areas of expertise, including clinical, social science, and technical knowledge. A multidisciplinary approach is essential to identify and mitigate biases comprehensively.
2. Require Auditable Clinical Algorithms: Regulatory bodies and stakeholders must mandate clear and accurate information disclosure regarding the intended use and risks associated with clinical algorithms. Transparency in algorithmic functions is critical for fostering accountability and ensuring that end users are fully informed about the algorithms they employ.
3. Foster Transparent Organizational Culture: Developers, health care organizations, and academic journals should adopt a transparent approach by openly acknowledging limitations, biases, and the results of mitigation strategies when communicating algorithmic outputs. Transparent reporting is fundamental in promoting awareness and understanding of algorithmic biases.
4. Implement Health Equity By Design: Principles of health equity should be integrated into the very fabric of algorithm development and deployment. Developers must proactively work to mitigate discriminatory effects across all stages, including pre- and post-processing. By embedding health equity into the design process, developers can contribute significantly to reducing biases in clinical algorithms.
5. Accelerate Research: To inform effective bias mitigation strategies, funding agencies and health care organizations should increase research efforts dedicated to studying and developing methodologies that mitigate biases. Expanding the empirical evidence base is vital in making informed decisions and advancing the field of bias mitigation in clinical algorithms.
6. Establish Governance Structures: Policy makers should play a proactive role in encouraging covered entities, such as health care providers and health plans, to establish governance structures and evaluation schemes. These structures are essential in preventing and addressing harms that may arise from algorithmic biases. Regulatory frameworks can provide guidelines for the ethical and equitable use of algorithms in health care decision-making.
7. Amplify Patients’ Voices: Developers must actively engage with diverse patients and local communities in the design and preprocessing stages of clinical algorithms. Incorporating patient perspectives and cultural insights is crucial for developing patient-centered and culturally affirming care. By involving patients and communities, algorithms can be designed to address the specific needs and nuances of various populations, ensuring inclusivity and fairness in health care practices.
Looking to the Future
Moving forward, the findings of this review demonstrate the urgent need for further research. Future studies should focus on identifying optimal methods for mitigating bias in various scenarios, considering the intricate interplay of factors such as patient populations, clinical contexts, and the specific biases that need addressing.
The investigators stated that by tailoring mitigation strategies to these factors, the health care industry can take targeted and effective steps toward ensuring fairness and equity in the application of clinical algorithms, aligning with the goals set forth by HHS in its proposed rulemaking.
References
1. Cary MP, Zink A, Wei S, et al. Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: a scoping review. Health Affairs. Published online October 2, 2023. doi:10.1377/hlthaff.2023.00553
2. Grossi G. Health Affairs challenges systemic racism in health care. AJMC. October 2, 2023. https://www.ajmc.com/view/health-affairs-challenges-systemic-racism-health-care