By Stacey Kusterbeck
Participants in clinical trials often struggle to comprehend informed consent forms, raising questions about whether they are making truly informed decisions. Artificial intelligence (AI) tools are a potential solution to this longstanding ethical concern. “While the use does not seem to be widespread yet, we have seen interest and pilot projects for AI tools used in both the consent development and in the consent process,” says Lindsay McNair, MD, MPH, principal consultant at Equipoise Consulting. Another potential use is AI chatbots that answer questions from prospective participants as they review consent forms.
“Informed consent is the cornerstone of ethical research, and we need to make sure that our interest in using new, interesting tools is not fundamentally changing the three essential components of information, comprehension, and voluntariness,” says McNair.
AI-generated informed consent forms outperformed human-written informed consent forms in readability without compromising completeness or accuracy, a recent study found.1 Compared to human-written informed consent forms, AI-generated informed consent forms more consistently provided clear, actionable next steps. “This crucial aspect is often missing in standard consent documents,” says Adrian Zai, MD, PhD, MPH, one of the study authors and chief research informatics officer at UMass Chan Medical School. However, the burden is on researchers to ensure that AI-generated documents uphold ethical and regulatory standards. Here are some ethical concerns regarding AI tools used for informed consent:
AI models are only as reliable as the information they are given. When AI tools are part of the consent process, the Institutional Review Board (IRB) should ask where the tool is drawing information from, says McNair. Most AI programs draw from extensive databases that may contain both accurate and inaccurate information. “The tool may provide misinformation because it can’t tell which is which,” says McNair. “Hallucinations” — a result in which the AI tool makes up a response and creates false information — also are a concern.
“The AI may produce consent documents with gaps or inaccuracies if a research protocol lacks clarity,” adds Zai. To prevent this, researchers must be sure that AI-generated informed consent forms clearly, correctly represent study risks, benefits, and procedures.
There is a need to balance readability with completeness. AI can effectively simplify language, but there is a risk that complex medical concepts may be oversimplified. Participants must receive all the details necessary to make an informed decision about their involvement in a study. “This is especially critical for populations with limited health literacy. Improving readability should not come at the cost of omitting essential study information,” says Zai.
AI models are trained on large datasets that may contain inherent biases. Consent documents may unintentionally emphasize or downplay certain aspects of the study. This can affect how participants perceive risks and benefits. “Researchers must carefully examine AI-generated content for subtle biases, ensuring that all information is presented in a neutral and balanced way,” says Zai.
There is continued need for human oversight. Overreliance on AI without human validation can result in misleading or inaccurate consent documents. “The final review of an AI-generated informed consent form should always involve clinical researchers, ethicists, and regulatory professionals to ensure the document meets the highest ethical and legal standards,” underscores Zai.
IRBs should ask these questions when reviewing study protocols involving AI-generated consent documents, says Zai:
- Is there a robust process to verify that all necessary study details (such as risks, benefits, and procedures) are captured correctly in the AI-generated document?
- What steps have been taken to mitigate bias in AI-generated informed consent forms? Was the AI system designed to accommodate diverse linguistic and cultural needs? “If a study involves non-English-speaking participants, IRBs must ensure that AI-generated translations are accurate and culturally appropriate,” says Zai.
- What safeguards are in place to prevent errors and ensure that the final content accurately reflects the research protocol?
- Will participants be informed that AI was used in creating their consent forms? Will they have the opportunity to ask clarifying questions?
- Will participants have access to human researchers who can address any concerns?
- Who is responsible for reviewing AI-generated informed consent forms before they are presented to participants? Is there a transparent process for identifying and correcting any discrepancies?
- Have researchers used participant feedback or comprehension testing to confirm that the AI-generated documents are easy to understand?
“While readability scores are essential, comprehension involves more than word difficulty and sentence length,” says Zai.
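Readability scores such as Flesch Reading Ease are one common way to quantify the "word difficulty and sentence length" Zai mentions. As a rough illustration (not any specific tool used in the cited study), the sketch below computes a Flesch Reading Ease score with a simple heuristic syllable counter; production tools use dictionary-based syllable counts and would score somewhat differently.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, subtract one for a silent trailing 'e'.
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text (60-70 is roughly plain English)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Two hypothetical consent-form sentences with the same meaning:
plain = "You may leave the study at any time. Your care will not change."
dense = ("Participants retain the prerogative to discontinue participation; "
         "therapeutic management remains unaffected.")

print(round(flesch_reading_ease(plain), 1))   # scores high (easy to read)
print(round(flesch_reading_ease(dense), 1))   # scores far lower (hard to read)
```

As Zai notes, a high score only confirms short words and sentences; it cannot confirm that a participant actually understands risks and benefits, which is why comprehension testing with real participants is still needed.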
Reference
1. Shi Q, Luzuriaga K, Allison JJ, et al. Transforming informed consent generation using large language models: Mixed methods study. JMIR Med Inform. 2025;13:e68139.