By Stacey Kusterbeck
Increasingly, clinicians will be using artificial intelligence (AI) tools to inform their decision-making. Many are unclear what, if anything, to tell patients about it. “Clinicians are looking for more concrete and realistic guidance and policies on this issue,” says Meghan Hurley, MA, a clinical research associate at the Center for Medical Ethics and Health Policy at Baylor College of Medicine.
According to the Blueprint for an AI Bill of Rights released by the White House in 2022, people have a right to know about the use of AI that affects them, in ways that are easy to understand.1 However, it is unclear what that means for healthcare providers. “As bioethicists and researchers on the use of AI in the medical setting, we discussed whether the principles articulated in the AI Bill of Rights would translate to healthcare effectively,” says Hurley.
The proposed “right to notice and explanation,” for example, seemed to entail notifying patients that an AI tool has been used, and how it contributed to their clinical care. “But to what extent and what end?” asks Hurley. Hurley and colleagues explored these ethical questions in a recent paper:2
- How detailed an explanation should physicians be required to provide to their patients about an AI tool that was used in their care?
- Is a detailed explanation even possible, given the lack of transparency in AI tools? “Some AI algorithms are ‘black boxes,’ with internal mechanisms that are opaque and uninterpretable to humans,” explains Hurley.
- What does a meaningful explanation look like to patients? What would that entail, in clinical practice?
- Is notification of AI use in patient care enough? Or should a clinician’s explanations meet standards of informed consent?
According to the American Medical Association, both physicians and their patients should have a clear understanding of how AI is used in a patient’s care.3 “But despite increased discussions on the topic, few actual policies are currently in use,” says Hurley. A major ethical concern is how AI tools are going to affect the physician-patient relationship. Many patients are not yet comfortable with the use of AI in their clinical care, even when they are able to acknowledge the potential benefits of AI.4 “Studies show patient preference for human doctors over AI systems, or, if AI tools are utilized, having ‘humans in the loop,’” says Hurley.
How clinicians choose to take accountability for the AI tools they use in a patient’s care ultimately will affect patient understanding of, acceptance of, and overall comfort with that AI tool, says Hurley. Clinicians’ ethical obligations to disclose the use of AI tools to patients still are somewhat unclear. “Clinicians may be obligated to be able to explain treatment to their patients, but the extent of this obligation for AI and its role in treatment has not yet been determined,” Hurley explains. Ultimately, doctors will need to be knowledgeable about an AI tool’s role and contribution to care, and be able to communicate this information clearly and effectively to their patients.
“Ethicists can help to inform the creation and establishment of guidelines or policies for the communication of AI tools to patients and can have discussions with clinicians about what this could look like in practice,” says Hurley.
A better understanding is needed of what patients actually want to know about the use of AI in their care. Some patients may not consider this information necessary, or may not want it at all. Knowing what patients themselves consider crucial to know about AI tools used in their care can help inform clinician-patient conversations. “Further empirical bioethics research is needed to help us better understand the breadth and variety of patient attitudes towards AI in various medical applications. Normative and conceptual work on these issues by ethicists and philosophers is equally important,” says Hurley.
References
1. The White House. Blueprint for an AI Bill of Rights. Published October 2022. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/
2. Hurley ME, Lang BH, Kostick-Quenet KM, et al. Patient consent and the right to notice and explanation of AI systems used in health care. Am J Bioeth. 2025;25(3):102-114.
3. American Medical Association. Augmented intelligence development, deployment, and use in health care. Published November 2024. https://www.ama-assn.org/system/files/ama-ai-principles.pdf
4. Riedl R, Hogeterp SA, Reuter M. Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? Empirical evidence and implications for medical practice. Front Psychol. 2024;15:1422177.