By Greg Freeman
The increasing use of artificial intelligence (AI) is posing a new type of threat to the security of patient data and financial information, says Jordan T. Cohen, JD, partner with the Akerman law firm in New York City.
There is the potential for breaches caused by AI-specific attacks, such as “prompt injection,” in which an attacker uses specially crafted prompts that trick the AI system into revealing confidential data.
“This has happened outside of healthcare, and while I’m not aware of public reports of prompt injection being used to nefariously extract healthcare data, the increasing adoption of AI tools in healthcare means we’re likely to see these types of attacks in the future,” he says. “There are other technical attack vectors, including attacks on data used by healthcare entities, such as ‘data poisoning,’ which involves malicious contamination of training data to corrupt or contaminate AI systems.”
Data governance issues also can cause AI-related breaches, Cohen says. The risk is particularly acute for the healthcare industry, given participants’ extensive reliance on new technologies that often are provided by outside vendors, he says. These new technologies are attractive to healthcare providers and staff, which may input HIPAA-regulated protected health information (PHI) into AI systems without the proper controls in place, he says.
Lack of vendor due diligence, weak contracts — like those with no clauses for security controls and audits — and failure to enforce HIPAA Business Associate Agreements (BAAs) also can contribute to governance-related breaches, Cohen says.
“None of these issues are unique to AI, but with more healthcare entities implementing AI tools, vendor security-related breaches, including those related to large AI datasets, are likely to increase,” he says.
The overlap of technical and data governance issues was illustrated in what is currently the largest publicly reported healthcare breach involving AI. In May, an information technology (IT) service management firm offering AI-powered tools inadvertently exposed a database containing PHI of 483,000 patients, Cohen says. The database, which was publicly accessible in 2024 for more than a month, included names, Social Security numbers, medical record numbers, diagnoses, and other identifiable information.
“While the IT company did not have evidence that the data had been misused, it has sent breach notification letters to affected individuals,” he says. “The incident illustrates how a vendor’s cloud misconfiguration or security lapse directly becomes the healthcare provider’s breach.”
The incident was a reportable breach that will certainly siphon a significant amount of time and money from the company and its client, he says. In addition to the costs of attorneys and forensic firms, the parties will face government investigations at the federal level, and likely the state level as well, both of which could result in financial penalties, Cohen says, and there are now at least two proposed class action lawsuits against the IT firm.
“In addition to the financial costs, these types of breaches can cause reputational damage and an erosion of patient trust. Patients may be hesitant to trust, say, an AI-powered scribe in its exam room if patients are worried that the data could be leaked or stolen,” Cohen says. “There is also the potential for AI data breaches and attacks to lead to service disruptions, particularly if the AI systems are deeply embedded in the day-to-day operations of the healthcare provider. We also shouldn’t forget about the damage that can be caused to patients affected by the breach, such as identity theft and extortion, especially if there is sensitive data at issue.”
Healthcare entities can take a number of preventive measures, many of which apply to AI and non-AI systems alike, Cohen says. These include robust data governance policies and procedures, as well as vendor risk management practices, which should involve conducting due diligence on the security posture of the vendor, a strong HIPAA BAA if they are a business associate, and ongoing monitoring, he says.
“Multi-factor authentication of systems involving AI is another practice that can thwart attacks. If healthcare organizations are building AI models or platforms, they should ensure that they are designed with privacy and security during the design process,” Cohen advises. “Like any system involving PHI, the system should be tested and subject to a rigorous risk analysis.”
Source
- Jordan T. Cohen, JD, Partner, Akerman, New York City. Telephone: (212) 259-8754. Email: [email protected].
Greg Freeman has worked with Relias Media and its predecessor companies since 1989, moving from assistant staff writer to executive editor before becoming a freelance writer. He has been the editor of Healthcare Risk Management since 1992 and provides research and content for other Relias Media products. In addition to his work with Relias Media, Greg provides other freelance writing services and is the author of seven narrative nonfiction books on wartime experiences and other historical events.
The increasing use of artificial intelligence is posing a new type of threat to the security of patient data and financial information.
You have reached your article limit for the month. Subscribe now to access this article plus other member-only content.
- Award-winning Medical Content
- Latest Advances & Development in Medicine
- Unbiased Content