Growing Ethical Concerns Surround AI Therapy Chatbots
December 1, 2025
By Stacey Kusterbeck
Artificial intelligence (AI) therapy chatbots are increasingly being used in psychotherapy.1 However, digital mental health tools raise significant ethical risks, according to a 2025 Hastings Center report.2 “We started thinking about this topic back in 2021 while working on some other articles on the ethics of AI in healthcare. We noticed that virtually no one had written about it. When ChatGPT launched in 2022, human-like chat became possible — and growing anecdotal evidence suggested people were using these tools as therapists,” says Amitabha Palmer, PhD, HEC-C, one of the report’s authors. Palmer is assistant professor of critical care medicine and director of education at the Center for Clinical Ethics in Cancer Care at MD Anderson Cancer Center.
There is a growing awareness of the need to consider the purported benefits of AI therapy bots from an ethical standpoint. “It’s not that these technologies are better than in-person therapy, but they may help to overcome barriers to access that might otherwise have been insurmountable,” acknowledges Palmer. For many people, mental health resources are out of reach — because they are too far away, are not covered by insurance, are too costly, or require taking time off work.
At first glance, AI therapy chatbots seem to overcome each of those barriers. However, decision-making on whether to provide a patient with a digital mental health app should involve a risk/benefit analysis, says Palmer. For example, the potential benefits of a therapy app may outweigh the risks for a cancer patient or a patient who lives alone, because of their higher risk of depression. There also are technology-based risks to evaluate, such as whether the app connects a patient who expresses suicidal thoughts with a live person and whether the app uses an evidence-based method to address the patient’s condition.
When clinicians consider whether therapy chatbots should be used, it raises the question “Compared to what?” says Palmer. For many people, it is not a choice between in-person therapy and an AI bot — it is a choice between an AI bot vs. no mental health intervention at all. There also are broader societal ethical concerns. “A problem with many technologies that replace or displace human activities is that they may confer individual benefit, but not societal benefit. That is, they displace activities that nurture and reinforce certain values. When those activities are displaced, so are those values. We believe this may be the case with digital therapy chatbots,” says Palmer. “I think the role of clinical ethicists will primarily be shaping institutional policy to ensure responsible use of these types of tools.”
Existing AI ethics guidelines fail to capture the unique complexities of mental healthcare, according to some ethicists. “Psychiatry is not like cardiology or oncology. Its data are subjective, contextual, and, often, stigmatized. Models trained on these data can easily misfire — reinforcing diagnostic bias, overriding patient autonomy, or eroding clinical judgment,” says Andrea Putica, PhD, a teaching and research fellow at University of Melbourne.
Mental health “has its own nuances that may require unique solutions to the ethical challenges of AI,” says William J. Bosl, PhD, a professor in the School of Nursing and Health Professions and Data Institute at University of San Francisco and lecturer at Harvard Medical School. To address this issue, Putica, Bosl, and colleagues developed the Integrated Ethical Approach for Computational Psychiatry (IEACP).3 The framework offers clinicians and policymakers a structured process to implement AI applications for mental health uses.
“The aim was to create a procedural, domain-specific ethical framework that helps clinicians, developers, and policymakers navigate real-world dilemmas step-by-step. It moves ethical reasoning upstream, embedding it in the development and deployment phases instead of waiting for harm or controversy to force a reaction,” says Putica. The IEACP framework addresses these ethical issues:
Lack of transparency. Some models generate clinical recommendations through processes that remain fundamentally unexplainable, even to the people who built them. This “black box” problem undermines clinicians’ and patients’ ability to understand their care. “Transparency becomes especially tricky with machine learning,” says Juliet Edgcomb, MD, PhD, an assistant professor-in-residence of psychiatry at UCLA Health and member of the UCLA Semel Institute for Neuroscience.
Concerns about bias and justice. “Psychiatric datasets are overwhelmingly Western, white, and high-income. Deploying such models globally risks embedding inequity directly into diagnostic systems,” says Putica.
When an algorithm is trained primarily on one population, it may perform more poorly in a different population or across different population subgroups, adds Edgcomb.
Lack of autonomy for patients with fluctuating capacity. For some patients, decision-making capacity can fluctuate, sometimes dramatically. AI systems might override patient preferences during vulnerable periods. “Complex algorithmic outputs can make truly informed consent difficult. This tension becomes acute when AI predicts suicide risk or psychosis relapse, and triggers suggested interventions that patients reject,” says Edgcomb.
Compromised scientific integrity because of commercial and translational pressure. “Rapid innovation often outpaces peer review, leading to premature clinical adoption of unvalidated algorithms,” explains Putica.
Expansion of privacy concerns beyond traditional confidentiality. “We now face questions about how algorithms process data, how models train on sensitive psychiatric information, and how digital tools infer mental states from behavior patterns,” says Edgcomb. This creates new risks of data breaches and unauthorized access to stigmatizing information. “These tools need to work alongside patient stories and experiences, which makes informed consent and transparency more complex than in other medical fields. Without clear ethical guardrails, we risk seeing AI reinforce existing biases, undermine patient autonomy, and widen gaps in care quality,” warns Edgcomb.
Ethicists “have to get out of the armchair and into the workflow,” says Putica. Ethicists can offer case-based educational sessions for clinicians and data scientists on how algorithmic bias, opacity, and autonomy issues play out in psychiatric contexts. “Join project teams early — as co-designers, not auditors,” advises Putica.
Importantly, ethicists should offer feedback when decisions are being made about algorithmic thresholds or data-sharing policies. “Get involved early in stakeholder mapping and engagement. We need ethicists ensuring that patients with lived experience, families, clinicians, and marginalized communities have seats at the table from day one, not just when problems emerge,” says Edgcomb.
Another unique ethical challenge with AI in psychiatry is that diagnoses are defined by observed behaviors, not by biological or even psychological causes, adds Bosl. In one case, a woman was misdiagnosed with schizophrenia based on family history and symptoms of confusion and erratic behavior.4 Eventually, a computed tomography scan revealed a tumor near the olfactory nerve that required surgery, not antipsychotic medication or talk therapy. In this case, the tumor was causing symptoms that mimicked the behavioral definition of schizophrenia. Cases like this one reveal the limitations, and dangers, of AI tools in psychiatry. “AI systems train on available data, so they are training on behavioral assessments and are unable to ‘think’ outside the box about underlying neural dysfunction that hasn’t yet been explored,” says Bosl.
The IEACP framework addresses ethical challenges with “instrumental” AI systems — those that augment human capabilities while remaining under human supervision. “A paradigm shift toward agentic AI — autonomous systems capable of managing entire clinical workflows without human oversight — introduces deeper ethical, legal, and philosophical challenges,” says Bosl. As autonomy in decision-making blurs the boundaries between machine and clinician, trust and moral responsibility for errors become urgent ethical concerns. “Will agentic systems become conscious and able to make decisions as moral agents? Many tech leaders believe this without question and are proceeding as if it’s a foregone conclusion. This assumes that we already know what human consciousness is, and that machines can attain it. It’s a deep philosophical question with important legal and ethical ramifications,” observes Bosl.
Bosl says the most important role for ethicists is education — of everyone involved in implementation and use of digital mental health tools. Clinicians need a clear understanding of how generative AI systems work — for example, that the tools are generating probabilities based on the data they were trained on. “If clinicians understand that, it can help them to use new systems as tools, while being wary of the information and recommendations, letting their own judgment evaluate AI output,” says Bosl.
Ethicists can help provide training to allow clinicians to do the following, says Edgcomb:
- interpret AI outputs through ethical lenses;
- recognize algorithmic bias;
- maintain therapeutic relationships when using computational tools; and
- communicate AI-generated insights to patients with varying cognitive capacities.
Ethicists also must be ready to advise and educate policymakers and leaders involved in implementation of newer autonomous systems. “The economic pressures for implementing efficient AI agents that can automate complicated workflows and, thus, cut costs for large healthcare organizations will be very strong. Limiting autonomy and, thus, moral agency may be difficult for some executives to do. But it may be more important than ever. We will need another moral framework,” says Bosl.
Another group of researchers looked at patients’ reasons for preferring chatbots for certain healthcare tasks.5 Researchers interviewed 46 patients who used chatbots between 2022 and 2024. Some patients did so because they preferred to discuss sensitive topics, such as mental health, with a chatbot to avoid being judged by a doctor. One patient stated, “I might log onto [the chatbot] looking for a psychiatrist, and I may not be comfortable with my PCP (primary care provider) knowing that information.”
“This is really an ethical quandary. On the one hand, I like it that patients have a way to ask questions without fear of judgment. But on the other, I worry that it reflects a need for us clinicians to be even better at having those same conversations without judgment. It’s not easy, but we need to keep working at it so we don’t need chatbots to do it for us!” says Matthew DeCamp, MD, PhD, one of the study authors and director of the research ethics program at the Colorado Clinical and Translational Sciences Institute and associate professor in the Center for Bioethics and Humanities at University of Colorado Anschutz Medical Campus.
There is a pressing need for ethicists to be involved with decisions about digital tools being used at hospitals, AI or otherwise. “Ethicists can help support hospital policies about chatbots that are aligned with ethics and patients’ expectations. Key issues would include transparency in use, protecting the privacy of the conversation, and more,” offers DeCamp.
When an AI chatbot, whether used for general concerns or specifically for mental health, is being implemented at a hospital, the focus tends to be on convenience, cost savings, and access. Ethicists are needed to ask patient-focused questions such as, “What can clinicians do to ensure their patients are not turning to chatbots as a default to avoid being judged?”
“Another tension we see is that some patients who have these interactions with chatbots don’t want them shared with their care team — but, of course, the care team would like to know. That’s another key policy question,” says DeCamp.
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Frances A. Warning: AI chatbots will soon dominate psychotherapy. Br J Psychiatry. 2025;Aug 20:1-5.
2. Palmer A, Schwan D. Digital mental health tools and AI therapy chatbots: A balanced approach to regulation. Hastings Cent Rep. 2025;55(3):15-29.
3. Putica A, Khanna R, Bosl W, et al. Ethical decision-making for AI in mental health: The Integrated Ethical Approach for Computational Psychiatry (IEACP) framework. Psychol Med. 2025;55:e213.
4. Leo RJ, DuBois RL. A case of olfactory groove meningioma misdiagnosed as schizophrenia. J Clin Psychiatry. 2016;77(1):67-68.
5. Dellavalle NS, Ellis JR, Moore AA, et al. What patients want from healthcare chatbots: Insights from a mixed-methods study. J Am Med Inform Assoc. 2025;Oct 6. doi:10.1093/jamia/ocaf164. [Online ahead of print].