By Stacey Kusterbeck
Prognosis is an important factor guiding decision-making on end-of-life care, and artificial intelligence (AI)-based prognostic tools are increasingly being integrated into clinical workflows. “These tools present important ethical considerations that deserve our careful attention,” says Matthew G. Hanna, MD, vice chair of pathology informatics at the University of Pittsburgh Medical Center.
AI prognostic tools offer the potential for improved risk stratification and clinical decision-making. However, the tools also raise ethical concerns about informed consent, fairness, and transparency. “One of the key issues is ensuring that clinicians critically assess the accuracy and appropriateness of any AI tool used in patient care. This is very commonplace in pathology; however, other medical specialties are not required to do so in the same regard,” observes Hanna.
AI prognostic tools must be validated in the specific context in which they are applied. “Clinicians should remain aware of potential biases, especially those that could affect underserved or vulnerable populations,” says Hanna.
Informed consent also is a central concern with AI prognostic tools. Patients should be made aware when AI is influencing decisions about their care, particularly if prognostic assessments could affect end-of-life planning. “We need to ensure patients remain at the center of decision-making and that they understand how these tools work in broad terms,” says Hanna.
Even as AI tools support decision-making, they do not absolve clinicians of responsibility. “Human judgment and ethical reasoning must remain primary,” says Hanna. He advocates framing AI as augmented intelligence, with physicians in the loop treating model outputs as one more data point to weigh alongside all the others when making clinical decisions. Hospital ethicists can help ensure the ethical use of AI prognostic tools in these ways, says Hanna:
- helping to evaluate AI prognostic tools before implementation;
- providing guidance to clinicians on how to ethically integrate AI prognostic tools into their clinical practice; and
- helping to formulate policies on transparency, documentation, and patient communication regarding the use of AI prognostic tools.
Many hospitals and health systems already are rolling out AI-based prognostic or mortality prediction scores. “Ethicists absolutely must be involved. They are uniquely sensitive to issues of choice and fairness, and that input is critical to identifying and managing ethical issues at the local level,” says Matthew DeCamp, MD, PhD, director of the research ethics program at the Colorado Clinical and Translational Sciences Institute and associate professor in the Center for Bioethics and Humanities at University of Colorado Anschutz Medical Campus.
Privacy is just one example of the ethics issues that come up with AI prognostication. “Should everyone have access to this score in the medical record, or just palliative care? Ethicists must weigh in on these questions,” says DeCamp.
DeCamp and colleagues saw a need to create data-driven ethics guidance for the use of AI prognostic tools. The researchers interviewed 45 palliative care providers at four academic medical centers.1 Overall, the study participants conveyed the view that AI-based prognostication was a form of “screening” for end of life. Based on the interviews, the researchers identified four principles to guide the use of AI-based prognostic tools: use should be evidence-based, should account for opportunity costs, should distribute costs and benefits fairly, and should respect persons and their dignity.
For clinicians, it is important to remember that prognosis is only one part of the decision-making equation, says Kathryn Huber, MD, the study’s lead author. “In order to respect choice and individual dignity, clinicians should be sure not to over-focus on just this risk score,” she says. “Clinicians should keep quality of life, family, and spiritual needs in mind — technology sometimes tempts us otherwise.”
A second key lesson for clinicians is to pay close attention to whether the model was trained on data reflecting their patient population. Some models are more accurate for certain patient groups or clinical circumstances (such as a diagnosis of cancer) than for others. “These tools can also be used to prompt us as clinicians to have earlier conversations with patients about goals and end-of-life care preferences to reach a broader population. Likely, more patients will be offered palliative care support than [they] would have in the past,” says Huber.
AI tools make information on prognosis available to anyone coming into contact with patients within a given health system. In light of this, clinicians from all specialties can take the opportunity to hone their primary palliative care skills in treating patients with a variety of chronic conditions and serious illness. “Balancing these increased opportunities with limited access to palliative care specialists, especially in rural or underserved areas, is one challenge health systems will have to start thinking strategically and proactively about,” says Huber.
Considering all the ethical concerns surrounding AI in clinical care, ethicists can expect to face these issues during future consults. “We have not encountered any cases at our institutions, and we have a fair amount of AI systems in place. But issues may be coming up and are just not making it into consults,” says Benjamin X. Collins, MD, MS, MA, an assistant professor of clinical medicine at Vanderbilt University Medical Center.
Collins and colleagues searched for cases involving AI in ethics consults, intending to analyze the issues that were arising.2 The researchers found no documented cases. Instead, they developed these hypothetical cases to illustrate the ethical issues most likely to come up during consults:
- A patient with advanced cancer wants to prolong life by a specific time period. The oncologist consults with an AI tool to determine the best regimen to achieve the patient’s goal. The AI tool recommends an uncommon chemotherapy regimen, but it is unclear what information it used to come up with this unexpected recommendation. For clinicians, the ethical question is whether the patient should be presented with the AI’s recommendation as a treatment option, given the fact that the reasoning behind the recommendation cannot be explained.
- A primary care physician disagrees with an AI tool’s recommendation that a patient is at high risk for suicide and should be placed on a psychiatric hold. The hospital policy states that the doctor ultimately is responsible for the decision, and the doctor worries about liability risks if the AI recommendation is not followed. “Are you going to make an important medical decision just based off of an AI system? We generally don’t think of computer tools as we would a person [who] has any agency in a decision,” says Collins.
- Two physicians disagree about a diagnosis. The AI system supports one diagnosis. This raises the question about how heavily the AI recommendation should be weighed. “If an AI system is used to provide data in addition to those two clinicians, and it agrees with one and disagrees with the other, it’s unclear how that should be interpreted,” says Collins. “Is it like any other data point? Or does it break a tie in terms of what decision to go with?”
Even if there is no ethical conflict serious enough for physicians to request an ethics consult, many AI-related ethical questions commonly arise for clinicians. Physicians may wonder about their ethical obligation to inform patients when an AI tool was used for decision-making. “As of now, informed consent for AI might be required in surgery or in research. But outside of those contexts, if there’s an AI tool that’s FDA-approved, it’s possible to use it without informed consent,” says Collins.
Some clinicians are attempting to discuss the use of AI with their patients. “But it’s not always happening. And when it does, it’s a difficult conversation because the clinician is not going to be able to answer all the questions about the AI,” says Collins. This raises another ethical concern about the concept of AI “explainability.” When using a non-AI tool, such as a clinical decision support tool, the clinician is able to explain why the tool provided the output it did. For instance, the tool may have given a score because of the patient’s medical history and lab test results. With an AI system, the clinician may have no idea what its recommendation was based on. “You can have some AI systems that are much better at certain decisions than humans. But if it’s biased, then the human is not going to know that. So, it’s warranted for clinicians to have hesitation,” says Collins.
For ethicists to assist with these cases, some familiarity with AI systems is helpful. According to the authors, it helps ethicists to understand:
- How to evaluate the quality of data sources for AI training;
- How AI systems are evaluated before being FDA-approved;
- The approval process used by the hospital for choosing an AI system; and
- What types of problems AI may be able to address, and what the current limitations are for AI tools.
“We can’t expect the ethicist to be [a] technical expert in AI systems. But some understanding of AI would allow the ethicist to have a conversation about an AI system,” says Collins.
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Huber K, DeCamp M, Alasmar A, Hamer M. The ethics of artificial intelligence-based screening for end-of-life care and palliative care. J Pain Symptom Manage. 2025;69(5):e425-e426.
2. Collins BX, Bhatia S, Fanning JB. Adapting clinical ethics consultations to address ethical issues of artificial intelligence. J Clin Ethics. 2025;36(2):167-183.