By Stacey Kusterbeck
Premature decisions on withdrawal of life-sustaining therapy (WLST) happen for many reasons. Sometimes, prognostication is inaccurate or overly pessimistic early after an injury. Time pressure and uncertainty are other factors. “Premature WLST risks a self-fulfilling prophecy, ending life support before a patient’s recovery potential is fully understood, which can inflate mortality and obscure the true prognosis,” says Ayham Alkhachroum, MD, MSc, associate professor of neurology at University of Miami Miller School of Medicine.
WLST accounts for a large share of deaths after severe traumatic brain injury (TBI), yet WLST rates vary widely among hospitals, observes Alkhachroum. Alkhachroum and colleagues wanted to see if a machine-learning model could replicate real-world WLST decisions. By doing so, the researchers hoped to learn more about what actually drives WLST decisions, beyond clinical severity alone.
The researchers created machine learning models to predict WLST based on variables available at different time points.1 The models were built with data from 155,639 patients included in the American College of Surgeons Trauma Quality Improvement Program National Trauma Databank. Among the 32,385 patients who underwent WLST, the median time to treatment withdrawal was 46.4 hours. The patient’s age, the highest emergency department Glasgow Coma Scale (GCS) score, and the facility’s WLST rate were the most important factors in predicting WLST.
Notably, institutional practice patterns were a strong independent determinant of WLST, regardless of the patient’s clinical condition. The facility WLST rate (an indicator of local practice culture) remained a top predictor, alongside age and GCS. “In other words, where a patient is treated still meaningfully shapes the likelihood of WLST, independent of how sick they are,” says Alkhachroum.
Machine learning models are useful for quality improvement, bias auditing, and transparency. For example, clinicians can use the models to answer questions such as, “Are we overweighting age?” or, “Is our early withdrawal rate unusually high?” “Used this way, the models can support more ethical care by highlighting modifiable practice patterns and encouraging clinicians to be patient in these decisions early on before irreversible decisions [are made],” says Alkhachroum.
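To make the idea of such an internal audit concrete, here is a minimal, hypothetical sketch: it fits a simple classifier to entirely synthetic data with stand-in variables (age, highest emergency department GCS, and a facility’s baseline WLST rate) and reports which variables dominate its predictions. The data, variable names, and model choice are assumptions for illustration and use scikit-learn; this is not the published model.

```python
# Hypothetical audit sketch: fit a simple classifier on synthetic, stand-in
# data and check which variables drive its WLST predictions. Illustrative
# only; not the published model and not for individual decision-making.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for age, highest ED GCS, and the facility's WLST rate.
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "ed_gcs": rng.integers(3, 16, n),
    "facility_wlst_rate": rng.uniform(0.05, 0.40, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only.
logit = 0.04 * (df["age"] - 60) - 0.3 * (df["ed_gcs"] - 8) + 6 * (df["facility_wlst_rate"] - 0.2)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier(random_state=0).fit(df, y)
# Permutation importance estimates how much each variable drives the predictions.
result = permutation_importance(model, df, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(df.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In this illustrative setup, a large weight on the facility’s own WLST rate would flag a practice pattern worth reviewing at the system level, not a reason to withdraw treatment in any individual case.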
Impatience with prognostic uncertainty is a persistent problem for both clinicians and families. “The uncertainty in prognosis is significant in the first few days and even longer in these patients,” underscores Alkhachroum. Clinicians and families, uncomfortable with not knowing the true prognosis, may adopt a pessimistic bias and move too quickly toward withdrawal. Ethicists can highlight the uncertainty in prognostication in these patients and help everyone involved to understand the patient’s values, preferences, and advance directives. “Patients with TBI may continue to recover months and even years after their injury,” stresses Alkhachroum.
Models are helpful to understand practice patterns at a system level. However, models should not be used to predict WLST for individual cases, emphasizes Alkhachroum: “The models predict what is likely to happen, not what ought to happen. Using them to make WLST decisions would encode existing biases, including withdrawal culture, and risk less ethical care.”
AI is unlikely to be useful in affecting individual patient and family decisions about WLST, asserts May Hua, MD, MS, associate professor of anesthesiology (in epidemiology) at Columbia University Medical Center.
“A withdrawal of life-sustaining therapy happens either because the surrogate decision-maker has a clear understanding of a patient’s wishes to not want to be on prolonged LST or, more commonly, because it becomes clearer and clearer that there is little hope for survival,” says Hua. In this second scenario, the clinical team is focused on helping families come to an understanding of the prognosis and how the patient’s goals may not be attainable. “I don’t think AI would be all that helpful for either scenario. For example, in the ICU (intensive care unit), we have scoring systems that routinely predict someone’s risk of mortality at 80% to 90%. But that doesn’t really change whether a family wants to pursue withdrawal. I would have a hard time believing that families would be swayed by knowing that an AI suggested withdrawal as a course of action,” says Hua.
Some clinicians use the intracerebral hemorrhage (ICH) score in decision-making on treatment withdrawal. This is ethically concerning, according to Nina Massad, MD, an assistant professor of neurology and neurocritical care at the University of Miami Miller School of Medicine.
The ICH score originally was designed as a risk-stratification and communication tool for providers, rather than as a bedside prognostic tool. However, the score is being used in ways that drive decision-making, including WLST. “Although the creators of the ICH score have explicitly discouraged its use to guide prognostication at the bedside, we continue to see this happen in clinical practice,” says Massad. Massad and colleagues analyzed data from 12,426 stroke patients and found that higher ICH scores were strongly associated with WLST, including early withdrawal decisions.2
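For context on what the score actually contains, the sketch below computes the ICH score as originally described: a 0-6 sum of five dichotomized admission variables (GCS, hemorrhage volume, intraventricular extension, infratentorial origin, and age). This is an illustrative reference implementation, not part of Massad’s analysis and not a bedside decision tool.

```python
# Illustrative computation of the ICH score: a 0-6 sum of five dichotomized
# admission variables. Shown only to make clear how coarse the inputs are.
def ich_score(gcs: int, volume_ml: float, ivh: bool, infratentorial: bool, age: int) -> int:
    score = 0
    # Glasgow Coma Scale: 3-4 scores 2 points, 5-12 scores 1, 13-15 scores 0.
    if gcs <= 4:
        score += 2
    elif gcs <= 12:
        score += 1
    # Hemorrhage volume of 30 mL or more scores 1 point.
    if volume_ml >= 30:
        score += 1
    # Intraventricular hemorrhage scores 1 point.
    if ivh:
        score += 1
    # Infratentorial origin scores 1 point.
    if infratentorial:
        score += 1
    # Age 80 years or older scores 1 point.
    if age >= 80:
        score += 1
    return score

# Example: GCS 7, 45 mL hemorrhage, IVH present, supratentorial, age 72 yields a score of 3.
print(ich_score(gcs=7, volume_ml=45, ivh=True, infratentorial=False, age=72))
```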
“In the neuro ICU, this creates a real risk of a self-fulfilling prophecy, where the score does not simply predict the outcome but helps determine it,” says Massad. Clinicians also may tend to be less aggressive with medical or surgical intervention if there is a perception that the patient is inherently unlikely to do well. That less aggressive treatment can further shape outcomes.
“For clinicians, the key point is to resist the temptation to use the ICH score deterministically,” says Massad. A high ICH score should not be the basis for early withdrawal of (or scaling back on) aggressive medical or surgical care, Massad explains. Ethicists can do these things, Massad offers:
- Help institutions to address how prognostication scores are being used in practice. “That includes encouraging prognostic pauses before WLST, auditing patterns of care, and making sure institutional culture supports the use of tools for provider discussion rather than as directors for life-support decisions,” says Massad.
- Reinforce guideline-based practices that discourage early limitation of care. American Heart Association/American Stroke Association guidelines recommend against initiating WLST within at least the first 48 hours after ICH presentation, since the early course can be highly dynamic and recovery potential often is unclear.3 “This ensures that patients are given adequate time for a more reliable assessment before irreversible decisions are made,” says Massad.
- Discourage early prognostication. “Prognosis is probabilistic, not absolute. Decisions should always be framed in the context of patients’ goals or values,” emphasizes Massad.
Ethicists are seeing more cases involving conflict over the timing of WLST, according to Stuart G. Finder, PhD, director of the Center for Healthcare Ethics at Cedars-Sinai in Los Angeles. “The American public has been exposed, even if they have not been explicitly self-aware, to the inherent uncertainty that partially defines clinical medicine. As a result, there has been a slow shift away from an unquestioned trust in the healthcare system — a kind of ironic outcome of making more information available,” says Finder.
In this context, when people hear stories of recovery (whether partial or full) after major neurologic injuries, they are unlikely to question whether the report is accurate or not. “Because this is what everyone wants as an outcome, the pump is primed, so to speak, to further doubt the trustworthiness of those healthcare professionals who give bad news,” says Finder.
In Finder’s experience, matters are made worse when providers introduce the idea of WLST using language such as “time to withdraw care.” “This further amplifies that providers are not to be trusted because they don’t ‘care.’ It is important that adequate time be allotted, that families not be pressured or feel they are pressured,” says Finder.
Instead, ethicists can encourage providers to first offer a medical description of what is going on with the patient, and then ask families to tell them about the patient. Finder suggests asking these questions: What kinds of things matter to the patient? How much burden of intervention would the patient be willing to endure in the hope of reaching some minimally acceptable outcome? How much of a chance of success would the patient feel is acceptable? “When this isn’t the approach taken, and instead clinicians push for limiting or withdrawing interventions, that’s when families resist — and use the stories they’ve heard of recoveries to undercut any hope for trust. Once that happens, that’s often when ethics is requested,” says Finder.
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Cobler-Lichter M, Delamater JM, Teixeira FJP, et al. Machine learning models to predict withdrawal of life-sustaining therapy in patients with severe traumatic brain injury. Neurology. 2025;105(9):e214249.
2. Massad N, Zhou L, Manolovitz B, et al. Association of the ICH score with withdrawal of life-sustaining treatment over a 10-year period. Ann Clin Transl Neurol. 2025;12(10):1992-2001.
3. Greenberg SM, Ziai WC, Cordonnier C, et al. 2022 guideline for the management of patients with spontaneous intracerebral hemorrhage: A guideline from the American Heart Association/American Stroke Association. Stroke. 2022;53(7):e282-e361.