By Stacey Kusterbeck
Concerns are growing that artificial intelligence (AI) tools are not being adequately evaluated from an ethical standpoint before implementation in healthcare settings. “Ultimately, ethicists must see themselves not just as observers or critics, but as active collaborators in shaping the responsible design and deployment of AI in healthcare,” emphasizes Michael Zimmer, PhD, director of the Center for Data, Ethics and Society at Marquette University.
Researchers studying AI tools often are focused on the technological feasibility of their study proposals. “Of course, this is a major challenge. But the ethical considerations of AI technology are important to consider throughout the lifetime of a project — from the design phase to translation,” says Anna Wexler, PhD, an assistant professor in the Department of Medical Ethics and Health Policy at the Perelman School of Medicine at the University of Pennsylvania. Wexler also is the principal investigator of the Wexler Lab, which studies the ethical, legal, and social issues surrounding emerging technology.
Research on AI tools in healthcare raises complex ethical questions that institutional review board (IRB) members likely have not encountered previously. While privacy also is a concern for other types of research, there are unique privacy risks with AI. “Longitudinal or multimodal passive data collection can create more complex risks for participant privacy,” observes Wexler.
The study protocol may reveal a lack of transparency about the data used to train the AI model being studied, and opaque training data can harbor undetected algorithmic bias. “Many AI tools operate as ‘black boxes,’ offering little explanation for their outputs. This undermines clinician trust and patient autonomy,” observes Zimmer.
These concerns point to the need for the AI research community to proactively address ethics concerns. Recently, grant applicants to the Artificial Intelligence and Technology Collaboratories for Aging Research program were asked to answer ethics-related questions. The applicants explained how (if at all) they would protect data privacy and confidentiality, address bias, advance health equity, and protect vulnerable participants. “The motivation was to prompt applicants to anticipate and consider ethical challenges associated with their proposed projects,” says Wexler. Wexler and colleagues analyzed the responses and identified areas where applicants would benefit from additional ethics-focused training and resources.1 Applicants largely cited data anonymization as a means to protect privacy, for example. However, anonymized datasets often can be re-identified, especially when linked to other sources.2 “Ideally, more applicants would adopt other means in combination with anonymization, such as secure data storage and training of study personnel, to protect privacy and confidentiality,” says Wexler.
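To make the re-identification risk concrete, the following minimal Python sketch (using entirely hypothetical records) illustrates a classic linkage attack: an “anonymized” dataset that retains quasi-identifiers such as ZIP code, birth date, and sex can be joined against a public record containing names, re-attaching identities to sensitive diagnoses.

```python
import pandas as pd

# Hypothetical "anonymized" study dataset: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth date, sex) retained.
anonymized = pd.DataFrame({
    "zip": ["53201", "53202", "53202"],
    "birth_date": ["1948-03-07", "1952-11-21", "1952-11-21"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["dementia", "heart failure", "COPD"],
})

# Hypothetical public record (e.g., a voter roll) with names attached.
public_roll = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["53201", "53202"],
    "birth_date": ["1948-03-07", "1952-11-21"],
    "sex": ["F", "M"],
})

# A simple join on the shared quasi-identifiers re-attaches names
# to the "anonymized" clinical records.
reidentified = anonymized.merge(public_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Any record that shares all three quasi-identifiers with an outside source is exposed, which is why anonymization alone, without complementary safeguards such as secure storage and trained personnel, is a weak privacy guarantee.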
Applicants recognized many different kinds of potential biases with AI tools, including selection bias, biases associated with age (such as expecting older individuals to be less comfortable using technology or to have lower digital literacy), and algorithmic bias. Two-thirds of applicants indicated that they would have additional protections in place for individuals with inadequate capacity to consent to participate in the research. Applicants proposed either to exclude these individuals from research altogether or to seek consent from a surrogate decision-maker. “The success of both strategies is highly dependent on the successful identification of individuals with inadequate capacity. Yet few applicants addressed how capacity would be assessed. This was a striking omission,” observes Wexler.
Ethics expertise can help researchers to address such issues. “Ethicists — especially those of us trained in technology ethics, not just medical ethics — can evaluate the social impact of deploying tools in real-world contexts and identify downstream risks that may be invisible to developers or IT (information technology) teams who often make these purchase and implementation decisions,” says Zimmer. Ethicists are uniquely positioned to raise questions about fairness, justice, and harm. Those kinds of questions typically are not addressed in technical or business evaluations.
“Unlike data scientists or clinicians, ethicists are trained to determine what ought to be done — not just what can be done. This brings a human-centered focus to these decisions. Including ethicists in decisions on how to deploy AI in healthcare settings can help ensure AI tools align with broader principles of care, social responsibility, and institutional trust,” says Zimmer.
Ethicists are ideally integrated right from the start of AI development and procurement processes, rather than being brought in only after a tool has been selected or deployed. “This requires rethinking the traditional role of ethics in healthcare institutions. Institutions that treat ethical analysis as part of their due diligence, alongside technical validation and legal review, are more likely to deploy tools that are socially acceptable and clinically trustworthy,” says Zimmer.
Ethicists should collaborate with AI developers and clinicians to ensure ethical scrutiny of AI systems, argue the authors of a recent paper.3 “The construction, testing, and application of AI tools begs for the engagement of healthcare ethicists. Our paper provides an overview of the ethical implications in the use of AI technology and strategies for addressing those challenges,” says William Nelson, PhD, MDiv, director of the Geisel Ethics and Human Values Program at Dartmouth College.
Currently, many healthcare institutions are actively exploring and/or deploying AI tools for clinical decision support, risk prediction, and operational optimization. “Ethicists should focus on emerging technologies that have not yet undergone thorough review,” recommends Ellison B. Weiner, the paper’s lead author and a member of Dartmouth College’s Center for Precision Healthcare and Artificial Intelligence. Ethical evaluation is needed for these AI tools in particular:
- AI-powered diagnostic imaging tools. “These are gaining traction in clinical practice but carry risks of reinforcing implicit bias due to skewed training data, and raise concerns about accountability for clinical decisions,” says Weiner.
- Algorithms used to automate insurance claims processing. Such tools raise ethics concerns about coverage denials for treatments that healthcare providers believe are medically necessary.
- Generative AI. Since these tools have the ability to produce novel content, there are ethical concerns about regulation, consent, and attribution.
- Ambient AI technologies. AI-powered clinical documentation tools passively capture and transcribe conversations between patients and providers, for instance. Thus, there are ethical concerns about privacy, informed consent, and data governance.
- Tools that predict sepsis, identify patients at high risk for hospital readmission, or assist with triage. “These are often marketed as life-saving innovations. Yet without robust validation across diverse populations, these tools can entrench existing health disparities and lead to unjust treatment decisions,” warns Zimmer.
- Natural language processing systems that analyze physician notes to flag patient risk. “These can raise questions about interpretability, consent, and privacy, especially if they are used to influence clinical decisions without patient awareness,” says Zimmer.
- Large language models used for administrative tasks such as documentation or patient communication. “While these tools promise efficiency, they pose risks around ‘hallucinations’ and misinformation, depersonalized care, and overreliance on possibly flawed generative outputs,” says Zimmer.
- AI-driven systems for workforce management or billing optimization. These may have unintended consequences for patient outcomes. “There is a need for ethical scrutiny that considers not only whether the tools work, but whether they are justified and fair, and for whom,” says Zimmer.
Ethicists can help ensure these technologies are evaluated not only for technical performance, but also for their impact on vulnerable communities and public trust in healthcare, says Zimmer. To accomplish this, ethicists can form interdisciplinary committees with key stakeholders at their institutions. Ethicists, along with research and development teams, clinical leadership, and the IT department, can discuss any AI tools that are currently being evaluated, asserts Weiner.
Ethicists also can work closely with IRBs to ensure that all of the new ethical concerns involving AI are considered before study protocols are approved. For instance, algorithmic bias, lack of transparency, reuse of data, and patient consent are complex but necessary issues for IRBs to scrutinize. “Validation may include testing against different populations, investigating informed consent, and plans for monitoring algorithmic drift over time,” says Weiner.
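As one illustration of what such validation might involve, the Python sketch below (hypothetical function names and an arbitrary threshold, not a vetted monitoring protocol) computes a model’s discrimination separately for each demographic subgroup and flags a crude drift signal between the validation period and a later monitoring window.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_performance(y_true, y_score, groups):
    """Report discrimination (AUROC) separately for each subgroup.
    A large gap between subgroups is a red flag for algorithmic bias."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            results[g] = float("nan")  # AUROC undefined with one class
            continue
        results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results

def drift_check(baseline_scores, current_scores, threshold=0.1):
    """Crude drift signal: has the mean predicted risk shifted between
    validation and the current window? The 0.1 threshold is a placeholder."""
    shift = abs(float(np.mean(current_scores)) - float(np.mean(baseline_scores)))
    return shift > threshold, shift

# Hypothetical example: outcomes, risk scores, and patient groups.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.6, 0.1, 0.3, 0.5])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(subgroup_performance(y_true, y_score, groups))
```

In practice, subgroup comparisons and drift checks like these would be run on a schedule and reported back to the IRB or oversight committee, not computed once at approval.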
Ethicists who want to engage meaningfully with AI in healthcare cannot rely solely on their ethics expertise. Ethicists also need to develop a foundational understanding of how AI systems are built, trained, and deployed. “This doesn’t require becoming a computer science or AI expert. But it does mean becoming conversant in core concepts, such as machine learning, data governance, and algorithmic decision-making,” says Zimmer. Understanding the broader institutional and regulatory landscape (such as the Food and Drug Administration’s oversight of AI tools or patient privacy regulations) can further strengthen an ethicist’s ability to contribute effectively. To gain AI expertise, ethicists can take advantage of the increasing number of workshops, short courses, or degree programs in AI ethics and health informatics.
Ethicists also can participate in cross-disciplinary research teams. “This is another powerful way to build the necessary knowledge and form collaborative relationships,” says Zimmer.
A typical team looking to develop an AI tool might include data scientists, clinicians, and hospital administrators or business leaders. “I’d encourage having an ethicist join the team early on — not just to flag risks later, but to help shape the project’s direction from the outset,” urges Zimmer. Ethicists can raise concerns about how training data for AI tools might reflect racial or socioeconomic bias, and work with the team to ensure the model is validated across diverse patient groups. Ethicists can help craft policies on informed consent for using patient data and ensure that community representatives are consulted about how the tool might affect their healthcare. As part of the evaluation process for the AI tool, ethicists can develop metrics for fairness and transparency to be considered alongside clinical accuracy. “As a result, the project balances technical innovation with ethical responsibility — before the tool is ever implemented,” says Zimmer.
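As a sketch of what “metrics for fairness” reported alongside clinical accuracy might look like, the following Python function (a simplified illustration; these particular metrics are common choices in the fairness literature, not ones prescribed by the sources quoted here) computes two group-fairness measures from a model’s binary predictions.

```python
import numpy as np

def fairness_report(y_true, y_pred, groups):
    """Two common group-fairness metrics for a binary classifier:
      - demographic parity gap: spread in positive-prediction rates
      - equal opportunity gap: spread in true-positive rates
    Values near zero suggest the tool treats groups similarly."""
    rates, tprs = {}, {}
    for g in np.unique(groups):
        m = groups == g
        rates[g] = float(y_pred[m].mean())
        pos = m & (y_true == 1)
        tprs[g] = float(y_pred[pos].mean()) if pos.any() else float("nan")
    return {
        "demographic_parity_gap": max(rates.values()) - min(rates.values()),
        "equal_opportunity_gap": max(tprs.values()) - min(tprs.values()),
    }

# Hypothetical example: predictions for two patient groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fairness_report(y_true, y_pred, groups))
```

A gap near zero does not prove a tool is fair, but a large, documented gap gives an interdisciplinary committee something concrete to act on before deployment.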
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Largent EA, Kim Y, Karlawish J, Wexler A. Ethics from the outset: Incorporating ethical considerations into the artificial intelligence and technology collaboratories for aging research pilot projects. J Gerontol A Biol Sci Med Sci. 2025;80(6):glaf035.
2. Narayan SM, Kohli N, Martin MM. Addressing contemporary threats in anonymised healthcare data using privacy engineering. NPJ Digit Med. 2025;8(1):145.
3. Weiner EB, Dankwa-Mullan I, Nelson WA, Hassanpour S. Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digit Health. 2025;4(4):e0000810.