By Stacey Kusterbeck
Physicians are optimistic about how artificial intelligence (AI) tools will affect psychiatric medicine in the long term, a recent study found.1 Researchers interviewed 42 physicians (21 psychiatrists and 21 family medicine practitioners) about case scenarios in which AI tools were used to evaluate, diagnose, or treat psychiatric conditions. Medical Ethics Advisor (MEA) spoke with Richard Sharp, PhD, one of the study authors, about the findings and their ethical implications. Sharp is a professor of biomedical ethics and medicine and co-director of the Biomedical Ethics Research Program at Mayo Clinic College of Medicine and Science.
MEA: What insights were gleaned on how physicians viewed AI tools in psychiatric medicine?
Sharp: Because these were one-on-one interviews, we were able to examine physician perspectives on clinical scenarios that we had developed with colleagues in behavioral medicine and psychiatry, scenarios very close to the real-world situations physicians find themselves in every day. I was pleased with the level of depth we got from their responses.
Physicians, overall, were optimistic and saw a lot of promise with AI. But they didn’t feel that these tools were quite ready for widespread use. Some of their concerns had to do with the validation of the AI tools that would be used to guide their practices. That was probably the biggest concern.
Much like we see with other technologies in medicine, physicians were a bit cautious because they were not yet personally familiar with these tools. They wanted to see data supporting that the tools would be safe and effective in guiding care, and they wanted to know more about how the algorithms would arrive at their recommendations.
In many ways, what physicians were most concerned about was whether these tools could function largely independently, without a human agent checking their work.
More surprising was the nuance in their bioethics-related concerns about engaging with the technology. For example, some physicians were concerned about situations where the AI was making one recommendation and their medical judgment was pointing in a different direction. How would they present that conflict to a patient they were caring for? And how would they manage it at a personal level?
If we were to do a study like this again, we would encounter physicians with much more personal experience with AI tools. Now, you would likely see physicians who have those details and have made individual decisions about whether to implement the tools in their work.
We are hoping to do studies, as I’m sure many other groups are, to look at physicians’ actual real-world experiences with these tools and whether the potential concerns actually materialize.
MEA: What specific benefits did physicians expect to see with AI tools in psychiatric medicine?
Sharp: Physicians were optimistic about the potential value of these tools to eliminate some of the administrative burden that is such a big problem in this area of medicine. In particular, they thought AI would be helpful in writing progress notes for patient encounters. That’s certainly one of the areas we see widespread concern about from physicians in a variety of different practices. But in psychiatry and behavioral medicine in particular, physicians do a lot of charting of patient experiences and documentation of patient histories.
Physicians stressed to us that a lot would depend on how these tools were implemented. For example, if they were implemented in a way that physicians would have to go through a series of menus in an EHR (electronic health record) or would have to go back and edit a progress note generated by AI because of extensive inaccuracies, that wouldn’t be helpful at all. Much would depend on the details — that was what we took away.
Physicians also stressed that, in some ways, these tools might be helpful as a kind of triage. The tools could ensure that straightforward cases are managed before any referral, and that cases requiring the unique expertise of a clinical psychiatrist are appropriately referred. In that way, the tools could help in designing referral mechanisms, so psychiatrists do not end up seeing patients who lack the clinical complexity that would really require a referral.
Physicians were optimistic in general about AI helping them with their day-to-day activities. But at that point in time, they were perhaps still waiting for additional details regarding the evidence in support of these tools, before they were ready to integrate them into their own clinical activities.
MEA: Is there anything unique about psychiatry that will affect the ethical use of AI?
Sharp: We’ve done a variety of studies now looking at physician as well as patient perspectives on healthcare AI, both in psychiatry and elsewhere. It’s interesting that there is a convergence of concerns across patients and physicians. Both are concerned about how AI tools may affect the patient encounter or may result in patients not being candid about their home environment. Physicians are concerned about not having access to all of the information a patient may be able to provide.
With other types of technologies, you might see physicians concerned about one thing, patients concerned about another, and hospital administrators concerned about something else altogether. In that situation, bioethicists seek to bring forward patients’ concerns, out of worry that those concerns will not be noticed until too late, only after implementation.
But here, we are seeing that physicians and patients are concerned about more or less the same things. Most of the stakeholders in this space seem to share the same concerns: the validity of the algorithms, unexpected outputs those algorithms might produce, and the need for some sort of human agent in the loop to make sure that odd recommendations are not acted on without that check in place.
MEA: What insights can be gleaned from previous technology implementations to ensure ethical use of AI tools?
Sharp: When EHRs were first rolled out in many clinical practices, I don’t think we anticipated that one of the significant impacts would be patients perceiving their physicians as less attentive. Because of the way physical spaces were set up in hospitals and clinics, a physician talking to a patient would have to turn sideways to enter information into the computer, breaking eye contact. As a result, new technologies followed, such as placing monitors inside glass panels to make it easier for physicians to maintain conversational engagement with patients. That’s a great example of something no one was really thinking about when EHRs were being rolled out: how would this actually look in the context of an examination room?
And here, what we heard were things that were parallel to that. Physicians were excited about the promise of AI. But they asked, “What would that look like in practice?” And I can’t help but wonder if some of that ambivalence comes from experience with other IT (information technology) in the past. I think that’s probably informing what we heard.
MEA: What role can bioethicists play in helping to ensure ethical implementation of AI at their institutions?
Sharp: When a new technology is being introduced, folks are going to have limited understanding of what the technology may involve. They may be wondering, “Are my concerns about this emerging clinical tool consistent with others? How do my peers think about this? How do my patients think about this?” That’s a place where bioethics has a lot to add, especially empirical bioethics.
Bioethicists can help characterize what those concerns consist of, so they are brought into the light and can be factored into decisions about future applications of those technologies. By raising awareness of the concerns, bioethicists enable informed decisions about whether to adopt AI, or how best to adopt it.
That’s where I really feel that bioethicists can play a significant role. We can help our colleagues in other parts of healthcare think more expansively about the full range of benefits and potential limitations associated with these technologies. We are still at an early stage, where the specific applications, and how they will be deployed within health systems, are not yet entirely clear. So, bioethicists can help people think about different possible futures, and about how some of those futures might be more responsive to patient needs than others.
That also mirrors what we are seeing in the field of bioethics. Bioethicists typically got involved in situations where someone failed to anticipate a significant harm or a patient rights consideration, and, after the fact, they identified strategies for mitigating that harm. Now, especially at academic medical centers, bioethicists help anticipate those issues before they actually occur.
In the field of bioethics, that’s a change that happened over the last two or three decades. More often than not, we are integral components in the teams involved in making decisions about adopting these AI tools. And that’s really very different from where the field was 20 years ago.
Reference
1. Stroud AM, Curtis SH, Weir IB, et al. Physician perspectives on the potential benefits and risks of applying artificial intelligence in psychiatric medicine: Qualitative study. JMIR Ment Health. 2025;12:e64414.