
The American Journal of Managed Care

  • January 2026
  • Volume 32
  • Issue 1
  • Pages: e18-e24

Building Trust: Public Priorities for Health Care AI Labeling

A Michigan-based deliberative study found strong public support for patient-informed artificial intelligence (AI) labeling in health care, emphasizing transparency, privacy, equity, and safety to build trust.

ABSTRACT

Objectives: Labeling and the use of model cards have been promoted as ways to increase transparency for multiple end users. This study aimed to identify key content for a health artificial intelligence (AI) tool label based on public perspectives and expectations.

Study Design: We used a mixed-methods study design, combining public deliberation and pre-/post surveys to inform participants about AI in health care and gather input on key information for a health AI tool label.

Methods: In 2024, we conducted 5 virtual community deliberations across Michigan, engaging 159 participants in facilitated small-group discussions that were qualitatively coded. Participants completed a 20-minute survey before and after the deliberation to assess changes in knowledge, attitudes, and trust regarding AI in health care.

Results: Participants prioritized information regarding privacy and security, health equity, and safety and effectiveness of AI tools for inclusion on a health AI tool label. An AI label is, therefore, a familiar and transparent mechanism to build trust and address patients’ desire for notification.

Conclusions: The findings highlight ethical gaps in using AI in health care settings and the value of publicly informed, patient-centered solutions. There is strong demand for clear, accessible information on how AI tools are used and their risks and benefits. A patient-informed label may address these ethical challenges and improve transparency, trust, and patient-centered communication as AI reshapes health care.

Am J Manag Care. 2026;32(1):e18-e24. https://doi.org/10.37765/ajmc.2026.89875

_____

Takeaway Points

Patients want transparency when artificial intelligence (AI) tools are used in their health care. We identified strong public support for a clear, accessible label that outlines benefits, risks, and equity implications of how AI is used.

  • An AI label is a familiar and transparent mechanism to build trust and address patients’ desire for notification.
  • Participants prioritized transparency around privacy and security, safety and effectiveness, and health equity.
  • Employers and health systems can use these findings to demand clearer AI disclosure from vendors and prioritize ethically deployed, patient-centered tools in health care delivery.

_____

Despite the exponential growth of artificial intelligence (AI) tools in health care, patients often lack clear, evidence-based information on how AI tools impact their care.1 Recent studies found that 66% of US adults expressed low trust in their health care system’s ability to use AI responsibly and 58% doubted that their health care system would protect patients from potential harm caused by these tools.2 A majority of the US adult public (63%) also reports wanting to be notified whenever AI is used in their care.3 Although there have been calls for more transparent communication tailored to patients’ needs, significant gaps remain in patient-facing information about AI tools in clinical settings.4-8

In response to these gaps, recent policy initiatives have increasingly emphasized measures that prioritize sharing of information about AI tools. Notably, in 2023, the HTI-1 final rule containing the Decision Support Interventions requirements, issued by the Assistant Secretary for Technology Policy (formerly the Office of the National Coordinator for Health Information Technology), highlighted the necessity of delivering patient-facing information that is comprehensive, accessible, and understandable to all patients.9 Likewise, the FDA has underscored its intentions to promote transparency and to address bias through efforts to collect and evaluate evidence that AI-driven medical devices benefit patients across diverse demographic groups,10 and the American Medical Association adopted a policy advocating for clinical AI tools to provide safety and efficacy data and clear, interpretable explanations for clinicians.11

One mechanism for sharing information about AI is the use of model cards or labels. Labeling has long been a public health policy tool to provide key information and a form of “tiered notification” to inform consumers about the intended use and associated risks of products (eg, food, cigarettes, medication), displaying critical information more prominently and additional information less prominently.12,13 For example, drug labels prominently display high-level information about dosage, delivery methods, and adverse events in a highlighted box, whereas more detailed information may be found in less prominent sections or in supplementary materials, such as a foldout insert or pharmacy information leaflet.

In health care, the FDA has routinely relied on labeling as “the most important means to ensure that consumers have access to important warning information each time a drug product is purchased and used.”14 Similar to drug labels, product labeling could be applied to AI tools to promote transparency and better inform patients about how tools are used in their care.15 Although a handful of efforts to create AI labels are emerging, most treat the clinician as the end user and have neither engaged patients nor used research methods to understand or prioritize patient needs and interests.8,16-18

The overall purpose of this study was to address the gap in understanding public perspectives on ethical best practices for AI-enabled clinical decision support (AI-CDS) and how these tools affect trust in health professionals and institutions. We examined product labeling as a form of notification that can improve transparency and enhance accountability within the AI-CDS ecosystem. Specifically, our study aimed to identify priorities for the types of information patients would like to see on a health AI tool label, and our guiding research questions were as follows: (1) What ethical and practical concerns do patients have regarding the use of AI tools in their health care? (2) What types of information do patients most value seeing disclosed about AI tools through labeling? (3) How might these disclosures influence patients’ trust in health professionals and health systems?

METHODS

Recruitment

The study team obtained informed consent from participants through an online form and verbally. A copy of the consent form was also mailed to all participants. This study was reviewed by the University of Michigan Institutional Review Board and was deemed exempt from federal regulations (HUM00240942).

We conducted 5 virtual community deliberations across Michigan in 2024 to educate and gather insights from residents (N = 159) about the use of AI in health care and the types of information that could be summarized and included on a health AI tool label. Deliberative methods provide in-depth education and discussion to enable community-based, informed decision-making and have previously been used to identify and inform priorities for health policy and practice.19-22 We selected this method given the complexity of the topic and to provide sufficient information and time for robust discussion. Participants were recruited across Michigan to capture a diverse set of backgrounds. We used the University of Michigan Health Research website, a research platform maintained by the University of Michigan’s Institute for Clinical and Health Research, to recruit residents of Southeast Michigan (Ypsilanti/Ann Arbor area). We also used community-engaged strategies, partnering with local organizations to recruit community members, including groups historically underrepresented in health information technology and health research. Through these efforts, we recruited residents from a range of communities across Michigan, including a public housing community in Detroit, Middle Eastern/North African (MENA) communities in or near Dearborn, rural areas in Northern and Central Michigan, and Southwest Michigan (Grand Rapids area). To be eligible, participants had to be aged at least 18 years, be fluent in English, reside in Michigan, and have access to an electronic device and internet to participate in a virtual deliberation. Participants received $200 for their full participation in this study.

Deliberation Materials

Approximately 2 weeks before each deliberation session, participants were mailed a packet that included an educational booklet, instructions for Zoom setup, a copy of the presentation slides, a copy of the consent form, and a session agenda. Deliberation materials were developed iteratively by our study team with input from multiple stakeholders, 2 focus groups of the lay public, and subject matter experts. The educational booklet and expert presentations provided a general overview of public deliberation, AI and its possible applications in everyday life and health care, real-world case studies, and the ethical considerations of the use of AI in health care.

Data Collection

Participants attended a 5.5-hour virtual deliberation session conducted via Zoom. We administered pre- and postdeliberation surveys to gather demographic information and to assess knowledge and attitudes about AI and participants’ trust in health care. The surveys included a dot voting exercise in which participants were given 21 points (“dots”) to allocate across 10 options in response to the prompt, “For the AI tool, I want to know…” This exercise was designed to help prioritize the types of information that participants want on a health AI label. The 10 options were developed through a combination of deductive methods—drawing on policy documents such as the White House Blueprint for an AI Bill of Rights23 and the National Institute of Standards and Technology Risk Management Framework24 as well as literature on model cards and labeling for developers25-28—and inductive methods based on input from 29 stakeholder interviews with subject matter experts and possible end users (patients and clinicians) (Figure 1). During these interviews, we found that emphasizing a range of use cases over technical details of AI was more meaningful, particularly for people without expertise in AI such as patients and clinicians, which led us to define AI in the deliberations as a technology that “uses large amounts of data to process information to make predictions, automate processes, or help people make decisions.”

Each deliberative session included educational presentations and 2 facilitated small-group breakout discussions of 6 to 8 participants. In the first breakout, participants discussed their hopes and concerns related to the use of AI in health care. In the second breakout, participants were asked to individually prioritize the same 10 options from the presurvey into 3 categories: most important (2 items), important (3 items), and other information (5 items). Although the 10 items were predefined, participants were encouraged to take notes and to suggest new or missing items in response to prompts after the labeling activity. Following best practices for deliberative sessions, participants discussed their selections with one another after completing their individual prioritization. As a final step, each group created a label based on consensus.19 After each group completed this task, participants were reconvened as a large group to present their final small-group labels.

Data Analysis

Descriptive statistics from the survey were calculated for respondents’ demographic characteristics. Likert-scale ratings were used to measure participants’ evaluations of the group discussion. Zoom recordings of the small-group discussions were transcribed verbatim and deidentified for analysis. We developed our initial codebook using both deductive and inductive approaches, starting with a coding scheme based on the small-group session questions.27-29 This scheme was refined iteratively through review and coding of all 5 deliberations. We obtained 48 transcripts across all the deliberation sessions. Small-group discussions addressing priorities for labeling lasted approximately 1 hour. To capture a range of perspectives, we selected a nonrandom sample of 20 transcripts (4 from each deliberation). The sampling frame for selecting the 20 transcripts is included in the eAppendix (available at ajmc.com). Two study team members (M.L.S., K.A.R.) independently coded the transcripts and consolidated coding via consensus meetings. Final thematic analysis of the transcript sample was led by an experienced qualitative researcher (K.A.R.) with input from the study team to identify important themes and relevant quotations from the data. After coding the 20 sessions, we concluded that we had reached thematic saturation.

We aggregated the postdeliberation survey dot voting results (Figure 2) across all 5 deliberative sessions, summing the total number of points allocated to each label component, to develop a final prototype label (Figure 3). Here we present the findings from the small-group discussions that describe the rationale for prioritizing certain types of information for an AI label.
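As an illustration of this aggregation step, the minimal sketch below (in Python) shows how dot-vote totals of this kind could be computed: it sums each participant’s 21-point allocation per label component across all 5 sessions, flags any allocation that does not total 21 points, and ranks the components by total points, as in Figure 2. This is not the study’s analysis code, and the file name, column names, and data layout are assumptions made for the example.

```python
# Minimal sketch of the dot-vote aggregation described above; not the
# study's analysis code. The CSV layout (one row per participant-component
# pair with columns session_id, participant_id, component, points) is a
# hypothetical assumption.
from collections import Counter
import csv


def aggregate_dot_votes(path: str) -> list[tuple[str, int]]:
    """Sum dot-vote points per label component across all sessions."""
    totals = Counter()           # points per label component, all sessions combined
    per_participant = Counter()  # points allocated by each participant

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            points = int(row["points"])
            totals[row["component"]] += points
            per_participant[(row["session_id"], row["participant_id"])] += points

    # Each participant allocated exactly 21 points across the 10 options;
    # flag any allocation that violates that constraint.
    for participant, allocated in per_participant.items():
        if allocated != 21:
            print(f"Warning: {participant} allocated {allocated} points (expected 21)")

    # Rank components by total points, highest first.
    return totals.most_common()


if __name__ == "__main__":
    for component, votes in aggregate_dot_votes("dot_votes.csv"):
        print(f"{votes:4d}  {component}")
```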

RESULTS

Participants (N = 159) were mostly female (65%) with a mean age of 46 years. The racial and ethnic composition included 35% African American or Black, 33% White, and 21% MENA. Additionally, 57% of participants reported incomes below $75,000, and nearly half (47%) resided in areas classified as high social vulnerability as measured by the Social Vulnerability Index (SVI).30 The Table reports the overall demographic characteristics of the participant sample.

In response to postdeliberation survey questions about health systems and AI, 94% of participants rated the statement that health systems should inform patients about their use of AI tools as fairly or very true (on a 4-point scale of not true, somewhat true, fairly true, and very true). In addition, 85% of participants rated the statement “It is important that I know who has my health information” as fairly or very true.

General Reflections on a Health AI Tool Label

Throughout the deliberations, participants noted the importance of the label in being able to “make an educated decision and know if this is right for you” (democratic deliberation 1, breakout room 5 [DD1_BR5]). Prior to the voting and deliberation, participants reflected on the importance of trust. They noted that trust can be earned through making “the information more accessible to people and more understandable” (DD3_BR1). Overall, participants were optimistic that the AI label could enhance transparency and understanding for patients regarding the use of AI in their health care, with one noting that “when you increase people’s awareness on this topic, then you’re promoting trust and you’re making it easier for people to engage in this subject and for them to be able to ask informed questions and just have a constructive dialogue with the health care provider regarding this topic” (DD3_BR1).

Information Priorities for a Health AI Tool Label

Postdeliberation survey voting indicated that participants’ top priority for an AI tool label was knowing “how my privacy is protected” (500 total votes). This was followed by whether “the AI tool works for all patients regardless of gender, race, ethnicity, age, or disability status” (439 votes), whether “the AI tool meets industry standards for safety and effectiveness” (432 votes), “how the AI tool is used in my care” (422 votes), and whether “the AI tool improves health” (410 votes) (Figure 2).

To better understand how participants weighed various priorities and preferences, we analyzed the themes of privacy and security, health equity, safety and effectiveness, application, and implications for health outcomes.

Privacy and Security

Participants who emphasized the importance of privacy and security measures shared sentiments around being “able to trust where [their] data is getting sent and who’s getting access to that” (DD3_BR3). In other words, participants were concerned about what, if any, protocols were in place to protect their data when their doctor or care team utilizes AI tools in their care. One participant expressed concerns about data breaches they had experienced and noted that “anything we could do to decrease the risk would help with me feeling more confident with it” (DD5_BR1).

Health Equity

Participants also indicated that AI tools needed to work effectively and fairly for everyone, regardless of gender, race, ethnicity, age, or disability status. They recognized that “certain communities are not well represented at this point” and that there “could be more risk for certain communities because there’s not as much participation” (DD1_BR5). This alludes to the lack of representation in health databases or health research, especially for patients from marginalized or underrepresented backgrounds.

Safety and Effectiveness

Participants expressed that AI tools need to be regulated to build trust around them. One participant explained that regulations must be robust, stating that “companies always find loopholes in regulations,” but acknowledged that “if the standards are met, then I will be feeling safe” (DD3_BR4). Ultimately, participants agreed that “if it’s not safe or it’s not effective—like if it doesn’t work as good as the old-school way of doing something…there’s no point in using it” (DD2_BR5), underscoring the need for safety and effectiveness standards before AI tools are applied in health care settings.

How AI Is Used

Participants were interested in details on how the AI tool would be integrated into their care. They asked questions such as, “What role is the doctor playing in my care?” “What role is the AI participating in?” and “How much are they [the AI tools] facilitating vs what the doctor is doing?” (DD5_BR1). One participant likened the experience with the disclosure of information to a dentist appointment by saying, “It’s kind of like when you’re going to the dentist. ‘Tell me what you’re going to put in my mouth next.’ You know, how is this going to work?…Knowledge is power. So then it either is going to calm you down, or it’s going to cause anxiety. And then you can say, ‘Well, OK, do this, or don’t do that.’ But yeah, what are the risks? What are the benefits?” (DD4_BR3).

Whether AI Improves Health

Participants generally agreed that knowing whether an AI tool improved health was important and that if a tool did not improve health, it likely had no reason to be used in a health care setting. Some participants questioned what it would mean for health to be improved through various lenses, such as, “I looked at ‘improves health’ [as] meaning all-encompassing: Did it improve my convenience to access of health? As a byproduct, will my outlook toward my personal health improve? Was it easier for me? Was it more convenient? If it was ineffective, inconvenient, laborious, then obviously I would rate that as not improving my health, and then everything else wouldn’t matter, even though all these things are very important. If it didn’t do that, then...why are we talking about it, right?” (DD1_BR2). Another participant simply said: “I’d [want to] know [whether] it’ll improve health. Otherwise, what’s the point?” (DD2_BR5).

Although participants were limited in the dot voting exercise to the 10 previously mentioned items, many emphasized that all the items were important and found it difficult to choose just a few as top priorities, despite 5 emerging as the most highly rated. Participants suggested additional items that were not on the prespecified list, such as the funder of the AI tool, any impact on health care costs for patients, liability, whether the developer has a conflict of interest, and how the tool compares to traditional care by a provider.

DISCUSSION

Participants in our study prioritized privacy and security, equitable performance across demographic groups, and safety and effectiveness of AI tools. These priorities are consistent with policy directives and indicate that this information can be presented effectively to patients in a familiar label format.9,10

The way participants ranked different types of information reflects what patients see as most important in their care and highlights key areas for engagement. First, many participants assumed that the safety and effectiveness of AI would be thoroughly evaluated before AI tools were used in health care, so they placed less emphasis on these details on the label. However, current policy and implementation strategies do not always support this assumption.4-11,16-18 To promote patient trust, this information should be included on AI labels. Second, participants indicated that transparent privacy information was needed to address mistrust and concerns about data misuse. Third, many participants’ past experiences with discrimination, along with existing health disparities, drove their desire for information about bias and equity in expected outcomes. Finally, participants wanted basic information about what AI tools are being used in their care and for what purposes.

Previous work has found that patients do want to be notified about AI use.3 Our survey supports this finding: 94% of participants agreed that health systems should inform patients about their use of AI tools. Despite this strong support for notification, what specific information patients want to know about AI’s use in their care is less well understood. We found the label to be a resonant mode of disclosure. Further, our findings suggest focal areas for labeling standards for AI tools used in health care settings16 and indicate that an AI tool label may alleviate patient concerns, provide accountability, and increase trust in both AI tools and the health systems that use them. The label developed in this study responds to the increasing need to ensure safety and prevent diminishing trust and confidence in health care.5

In supplementary analysis, our data also provide indications of how and when a label might be implemented. For example, one participant noted, “I’d want to see it everywhere: on the app, in the doctor’s office, in the waiting room” (DD5_BR1). Other respondents noted that patient portals, emails, text messages, an application specific to AI labels, and QR codes on printed materials such as pamphlets would be helpful for accessing the information both before and after an appointment. These suggestions are consistent with our previous studies on notification preferences about health information sharing.31 Participants in the deliberations also raised concerns that lengthy materials might deter them from thoroughly reading the information needed to consent to use of a tool, and they pointed out that not all patients use their patient portal and could therefore miss notification of tool use.

Limitations

This study had several limitations. As a qualitative study, our findings have limited generalizability but suggest hypotheses for future quantitative evaluation of what people prioritize in a patient-facing AI label. Purposeful oversampling of diverse communities means that the results are not representative of the general Michigan population. Participants were making decisions about what information to include in labels based on the education we provided and a predetermined list of 10 items. We aimed to present materials that were neutral in tone, presenting both benefits and risks associated with the use of AI, yet we also recognize that unintentional biases may have influenced the discussions. As described in the Methods, we encouraged people to raise additional issues they thought would be important in patient communications about AI. There was also a gender imbalance in participation, and there is some evidence that women and men have different attitudes about the use of AI in health care.32 The use of consensus-building and small-group discussion, however, helped us ensure that both women’s and men’s perspectives were reflected in the final recommendations. Additionally, participants were required to spend a considerable amount of time in the deliberation, suggesting a high level of engagement that may not be reflective of the general public.

Future Research

Future research should include surveys and scale the deliberative method to a nationally representative US audience, ensuring geographic, demographic, and cultural diversity. Expanding the reach of this approach to youth and older adults will contribute to a more nuanced and comprehensive understanding of patient values, preferences, and concerns to better inform the ethical development, implementation, and governance of AI tools in the clinical setting. Future research should expand insights into how patients in different communities perceive and prioritize the use of AI tools in their health care. As AI rapidly evolves, ongoing research is needed to better understand patient attitudes and preferences over time. Finally, given the scope of tools ranging from medical devices to administrative applications, additional studies should investigate different approaches to labeling and communicating with the public about how these technologies are used in health care.

Conclusions

This study addresses a critical gap between patients’ expectations and notification preferences and current communication practices related to health AI. Our findings indicate that patients expect hospitals not only to disclose the use of AI tools but also to provide access to clear, specific, and comprehensible information about them.

Five core content areas emerged as critical for inclusion on AI labels: (1) privacy and security, (2) assurance that the tool works equitably across demographic groups, (3) evidence of safety and effectiveness, (4) information on how the tool is used in clinical care, and (5) whether the tool improves health outcomes. Although prior frameworks, such as model cards, have emphasized transparency, few have systematically integrated patient perspectives into their design. This study advances the field by centering patient voices through a community-engaged approach to label development.

The resulting AI tool label (Figure 3) reflects the collective concerns and expectations of our participants and provides a structured, patient-informed model for communicating AI tool use in health care. By grounding this label in the lived experiences of Michigan residents, we offer a scalable and adaptable strategy to promote transparency, foster trust, and support patient autonomy in the evolving landscape of health AI.

Acknowledgments

The authors would like to acknowledge and thank our research participants and community partners, whose insights and contributions were essential to shaping this work. They thank Reema Hamasha for her coordination and recruitment efforts. They would also like to thank the dedicated small-group facilitators for their thoughtful engagement in each conversation: Philip Barrison, Ariella Hoffman-Peterson, Kera Luckritz, Josh Richardson, Dalya Saleem, and Renée Smiddy.

Author Affiliations: Department of Learning Health Sciences (MLS, JP, ST) and Center for History, Humanities, Arts, Social Sciences, and Ethics in Medicine (KAR), University of Michigan Medical School, Ann Arbor, MI; Division of Health Policy & Management, University of Minnesota School of Public Health (PN), Minneapolis, MN; Department of Epidemiology, University of Michigan School of Public Health (SLRK), Ann Arbor, MI.

Source of Funding: The authors are grateful for the support of a grant from the National Institutes of Health National Institute of Biomedical Imaging and Bioengineering: Public Trust of Artificial Intelligence in the Precision CDS Health Ecosystem (grant 1-R01-EB030492).

Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.

Authorship Information: Concept and design (MLS, JP, PN, SLRK); acquisition of data (MLS, JP, KAR, SLRK); analysis and interpretation of data (MLS, JP, ST, KAR, PN); drafting of the manuscript (MLS, ST); critical revision of the manuscript for important intellectual content (MLS, JP, ST, KAR, PN, SLRK); statistical analysis (MLS, ST); provision of patients or study materials (MLS); obtaining funding (JP, SLRK); administrative, technical, or logistic support (MLS); and supervision (JP).

Address Correspondence to: Morgan L. Sielaff, BS, Department of Learning Health Sciences, University of Michigan Medical School, 2800 Plymouth Rd, North Campus Research Complex, Bldg 14, Room G016, Ann Arbor, MI 48109. Email: morgsiel@umich.edu.

REFERENCES

1. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4(1):140. doi:10.1038/s41746-021-00509-1

2. Nong P, Platt J. Patients’ trust in health systems to use artificial intelligence. JAMA Netw Open. 2025;8(2):e2460628. doi:10.1001/jamanetworkopen.2024.60628

3. Platt J, Nong P, Carmona G, Kardia S. Public attitudes toward notification of use of artificial intelligence in health care. JAMA Netw Open. 2024;7(12):e2450102. doi:10.1001/jamanetworkopen.2024.50102

4. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290. doi:10.3389/fdgth.2024.1267290

5. Gilbert S, Adler R, Holoyad T, Weicken E. Could transparent model cards with layered accessible information drive trust and safety in health AI? NPJ Digit Med. 2025;8(1):124. doi:10.1038/s41746-025-01482-9

6. Heming CAM, Abdalla M, Mohanna S, et al. Benchmarking bias: expanding clinical AI model card to incorporate bias reporting of social and non-social factors. arXiv. Preprint posted online July 2, 2024. doi:10.48550/arXiv.2311.12560

7. Mitchell M, Wu S, Zaldivar A, et al. Model cards for model reporting. In: FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery; 2019:220-229. doi:10.1145/3287560.3287596

8. Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit Med. 2020;3:41. doi:10.1038/s41746-020-0253-3

9. Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule. Assistant Secretary for Technology Policy. Updated March 7, 2024. Accessed July 18, 2025. https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program

10. Artificial intelligence-enabled device software functions: lifecycle management and marketing submission recommendations; draft guidance for industry and Food and Drug Administration staff; availability. Fed Regist. 2025;90(4):1154-1156. Accessed July 18, 2025. https://www.federalregister.gov/documents/2025/01/07/2024-31543/artificial-intelligence-enabled-device-software-functions-lifecycle-management-and-marketing

11. AMA adopts new policy aimed at ensuring transparency in AI tools. News release. American Medical Association. June 11, 2025. Accessed July 18, 2025. https://www.ama-assn.org/press-center/ama-press-releases/ama-adopts-new-policy-aimed-ensuring-transparency-ai-tools

12. Gostin LO, Wiley LF. Public Health Law: Power, Duty, Restraint. 3rd ed. University of California Press; 2016.

13. Thaler RH, Sunstein CR. Nudge: Improving Decisions About Health, Wealth, and Happiness. Penguin; 2009.

14. King JP, Davis TC, Bailey SC, et al. Developing consumer-centered, nonprescription drug labeling: a study in acetaminophen. Am J Prev Med. 2011;40(6):593-598. doi:10.1016/j.amepre.2011.02.016

15. Richardson L. Why FDA must increase transparency of medical devices powered by artificial intelligence. Pew. February 18, 2022. Accessed July 31, 2025. https://pew.org/3H4S1dw

16. Gerke S. “Nutrition facts labels” for artificial intelligence/machine learning-based medical devices—the urgent need for labeling standards. George Washington Law Rev. 2023;91(1):79-163.

17. Magrabi F, Ammenwerth E, McNair JB, et al. Artificial intelligence in clinical decision support: challenges for evaluating AI and practical implications. Yearb Med Inform. 2019;28(1):128-134. doi:10.1055/s-0039-1677903

18. Applied model card. CHAI. Accessed August 1, 2025. https://www.chai.org/workgroup/applied-model

19. Raj M, Ryan K, Nong P, et al. Public deliberation process on patient perspectives on health information sharing: evaluative descriptive study. JMIR Cancer. 2022;8(3):e37793. doi:10.2196/37793

20. Bosché M, Krust R, Fung A, Pawar AS. Exploring democratic deliberation in public health: bridging division and enhancing community engagement. Am J Public Health. 2025;115(4):500-505. doi:10.2105/AJPH.2024.307998

21. Carman KL, Mallery C, Maurer M, et al. Effectiveness of public deliberation methods for gathering input on issues in healthcare: results from a randomized trial. Soc Sci Med. 2015;133:11-20. doi:10.1016/j.socscimed.2015.03.024

22. Waljee AK, Ryan KA, Krenz CD, et al. Eliciting patient views on the allocation of limited healthcare resources: a deliberation on hepatitis C treatment in the Veterans Health Administration. BMC Health Serv Res. 2020;20(1):369. doi:10.1186/s12913-020-05211-8

23. Blueprint for an AI Bill of Rights. The White House. Accessed July 18, 2025. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/

24. AI risk management framework. National Institute of Standards and Technology. Accessed July 18, 2025. https://www.nist.gov/itl/ai-risk-management-framework

25. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290. doi:10.3389/fdgth.2024.1267290

26. Gilbert S, Adler R, Holoyad T, Weicken E. Could transparent model cards with layered accessible information drive trust and safety in health AI? NPJ Digit Med. 2025;8(1):124. doi:10.1038/s41746-025-01482-9

27. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101. doi:10.1191/1478088706qp063oa

28. Saldaña J. An introduction to codes and coding. In: Saldaña J. The Coding Manual for Qualitative Researchers. SAGE; 2009:1-31.

29. Thorne S. Interpretive Description: Qualitative Research for Applied Practice. 2nd ed. Routledge; 2016.

30. Social Vulnerability Index. CDC. July 22, 2024. Accessed July 30, 2025. https://www.atsdr.cdc.gov/place-health/php/svi/index.html

31. Raj M, Ryan K, Amara PS, et al. Policy preferences regarding health data sharing among patients with cancer: public deliberations. JMIR Cancer. 2023;9(1):e39631. doi:10.2196/39631

32. Tyson A, Pasquini G, Spencer A, Funk C. 60% of Americans would be uncomfortable with provider relying on AI in their own health care. Pew Research Center. February 22, 2023. Accessed December 10, 2025. https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
