By Greg Freeman
Risk management concerns were raised as soon as healthcare providers started using artificial intelligence (AI), and the fears are keeping pace with the increasing presence of the technology.
Regulators are starting to address some of the possible problems with AI in healthcare, notes Wendell J. Bartnick, JD, partner with the Reed Smith law firm in Houston. The California attorney general recently issued an advisory to give healthcare entities guidance on how the office will address AI issues, and Bartnick says it makes points that will be useful for risk managers in all states. (The advisory is available online at https://bit.ly/42OPpNS.)
“They reminded healthcare providers that only doctors can practice medicine, so they aren’t allowed to basically delegate decision-making to AI. No matter how good it is at this point, humans are still better, at least in the eyes of the AG (attorney general) and most regulators, so they would expect that the doctor is the one who makes the decisions, even if they’re using AI to help provide them with information,” Bartnick says. “Also, health plans cannot deny, delay, or modify care based on AI assessments of medical necessity. A human, again, needs to be involved in making those decisions of whether to deny, delay, or modify care or coverage.”
The California AG also reiterated that providers are prohibited from unlawfully discriminating when providing care and that AI could lead them to that violation. Bartnick explains that the AG is concerned that AI could inadvertently have a bias against a certain demographic because the input data was insufficient.
The AG cautioned that even if an AI tool's recommendations are valid for most groups, the tool cannot be used if its recommendations may be invalid for other groups.
“That’s just not necessarily that realistic at this point in time. If we know that it does a really good job of diagnosing cancer with respect to Black people but it does really poorly with respect to diagnosing cancer with white people, we can’t use that tool because of a disproportionate negative impact on people who are white?” Bartnick says. “I don’t think that is the outcome that anyone wants. This is the one that I think is maybe the most worrisome in terms of stunting the use of potentially very helpful AI-based technologies.”
AI holds great promise in healthcare, particularly in areas such as radiology scans, back-office operations, supply chain management, and staffing, says Barbara Bennett, JD, partner with the Frost Brown Todd law firm in Nashville, TN. There are some issues at the point of care, however, such as when AI prompts a provider to prescribe a certain drug or treatment, she notes.
“I think you have to be very careful there, and we can see that from the opioid history issues, which started that way,” she says. “I think the biggest concerns are privacy, security, and error, but I don’t think that providers and payers and tech companies that provide services to the healthcare industry should stop because of those concerns. I do think good governance and risk management are the way to go.”
Good governance is simply risk management that starts at the board level, Bennett says, and risk managers can help guide board members to the right decisions about AI.
“There is confusion right now and a little bit of chaos because it seems overwhelming. There are so many risks, it is so new, that I think there’s a little bit of being overwhelmed and (confused) about exactly what to do and how to do it,” she says. “In part, I think it is because of the whole funding issue, because tech companies and some larger entities can have the funding to set up governance, but people in healthcare are worried about funding care, their staff, and finding enough providers.”
The message to the board should be that, even though AI is new technology, the tried-and-true risk management strategies still apply, Bennett says.
“I think what overwhelms them is the black box, opaqueness, and fear associated with the technology, but managing the risk of the technology is a very classic risk management framework that has been around forever,” Bennett says. “I mean, risk management as a thing, a principle, is very old, and those steps do not change. It’s to gather the information, identify the risks, quantify those risks, and take the top five, 10, 12, and use risk management tools to decide how you’re going to manage them. That’s the way to go and think about it, instead of being overwhelmed by the technology itself.”
Sources
- Wendell J. Bartnick, JD, Partner, Reed Smith, Houston. Telephone: (713) 469-3838. Email: [email protected].
- Barbara Bennett, JD, Partner, Frost Brown Todd, Nashville, TN. Telephone: (615) 251-5577. Email: [email protected].
Greg Freeman has worked with Relias Media and its predecessor companies since 1989, moving from assistant staff writer to executive editor before becoming a freelance writer. He has been the editor of Healthcare Risk Management since 1992 and provides research and content for other Relias Media products. In addition to his work with Relias Media, Greg provides other freelance writing services and is the author of seven narrative nonfiction books on wartime experiences and other historical events.