By Stacey Kusterbeck
During the informed consent process, study participants expect research staff to cover the risks and benefits of participating in the clinical trial. But there is growing awareness of research risks that could affect certain geographic, religious, or ethnic groups.1 “In some types of research, such as data-centric research, there may be limited risks to individual participants, but there may be significant risks to groups and third parties,” says Carolyn Riley Chapman, PhD, MS, lead investigator in the Division of Global Health Equity at Brigham and Women’s Hospital and a member of the faculty at Harvard Medical School.
Institutional review boards (IRBs) make sure that research studies adhere to policies and regulations. In doing so, IRBs act to protect the rights, safety, and welfare of human research participants. “This includes ensuring that the benefits to society and participants exceed risks to participants, that risks to participants are minimized, that participants are selected fairly, and that the informed consent process, if applicable, is robust and appropriate,” observes Chapman.
According to the federal regulations on research (often referred to as the Common Rule), IRBs should not consider possible long-range effects of applying knowledge gained in the research as among the research risks that fall within the purview of their responsibility. “In effect, this means that risks to groups or society-at-large are not evaluated by IRBs when reviewing research protocols. The federal regulations are also asymmetrical in the disclosure of benefits and risks in the informed consent process. Any benefits to subjects or to others are disclosed. But regarding risks, only risks to subjects must be disclosed,” says Chapman. Potential risks to groups or society-at-large do not have to be disclosed.
Similarly, when IRBs make determinations for a waiver of informed consent, the criteria for a waiver only include consideration of risks to research subjects, not risks to groups or communities. “The federal regulations assume that most research benefits will accrue in the future to society, while risks will be borne by the research participants. This is certainly true for early-stage interventional research. Yet, with evolving technology, the power, scope, and types of research have changed,” asserts Chapman.
If researchers consider the possibility of group harm when designing protocols, harm mitigations can be built into the study design. “AI (artificial intelligence) research that seeks to identify sexual orientation is a great example of research that could potentially cause group harm,” says Sara Meeder, director of Human Research Protections at Maimonides Medical Center. With this research, participants could be harmed because their pictures were used and subsequently identified (whether correctly or not) as being associated with an individual from the LGBTQIA+ community, which could be stigmatizing. “However, the research itself could potentially be used to out members of the LGBTQIA+ community, a group harm,” says Meeder. This puts nonparticipants who are members of a particular group at risk.
“If this type of project came before my IRB, I would hope that the committee would ask questions about where the images used to train the algorithms came from — were they publicly available? Scraped from social media? Donated by participants?” asks Meeder. The IRB also should weigh the scientific merit of the study against the individual participant risks and possible group harms.
In some projects, potential group harm is easy to identify; in others, it is not as simple. The goal is to make sure the IRB is considering group harm and possible mitigation just as it would any other type of harm in research. “Some people assume that the IRB is talking about disapproving research that could harm groups in some way. In reality, the IRB or ethics review board should be aiding the researchers in thinking about harm mitigation so researchers can conduct responsible, thoughtful research that has the possibility of improving the world. Risk mitigation calls for creativity — it’s no different with mitigation of group harm,” says Meeder.
The potential for group harm does not mean the research cannot be done. However, the researchers and review committees should work to design the research in a way that takes into account the wishes and fears of the group that might be harmed, says Meeder.
Although individual members might consent to their data being used in such research, the effects of the research, good and bad, can reach all members of a particular group, including those who never gave consent, says Ryan Spellecy, PhD, Ursula von der Ruhr Professor of bioethics and director of the Human Research Protection Program at Medical College of Wisconsin. Researchers can proactively consult with ethicists about group harm concerns. “Ethicists are uniquely suited to consider the wide range of ethical issues associated with this research, whether it be potential benefits, privacy concerns, or bias,” says Spellecy.
Working with members of the community allows people at risk for group harm to make decisions for themselves.2 “We can draw lessons from the community-engaged research literature, which strives to engage community members as research team members who can speak to any number of issues in the research, including group harms and how to prevent or mitigate them,” offers Spellecy. Community members can give input on the kinds of questions to ask, or not ask, to avoid group harms. “A famous example is the Havasupai Indian research study. In this study, researchers from Arizona State University were successfully sued for, among other things, causing emotional harm or distress. This might have been avoided had researchers worked with the community,” says Spellecy.3 Hospital-based ethicists also can help to address this issue at their institutions. “Ethicists can help investigators, clinicians, and patients better understand the use, promise, and — not to sound alarmist — peril of data-centric research,” says Spellecy.
Researchers who work with big health data — data scientists, computational biologists, and statistical geneticists — do not intend to do group harm. “There just aren’t tools — technical infrastructure, training, guidance, oversight, or regulations — to help them do right by communities. Health data regulations in the U.S. focus on the rights of individuals, which makes sense — they were written for a different era,” explains Megan Doerr, MS, LGC, a director at Seattle-based Sage Bionetworks.
This puts the obligation on researchers to proactively consider group harms when developing protocols for data-centric studies. “Without such consideration, we are risking the scientific efforts because potential participants may feel distrust, leading to lower recruitment of populations who may experience group harms, and, ultimately, findings that are not generalizable for everyone,” says Maya Sabatello, LLB, PhD, an associate professor of medical sciences at Columbia University. Conversely, considerations of group harms can build trust with communities that are underrepresented in research, improving the scientific endeavor. “My experience shows that researchers are increasingly aware of the risks of group harms and view themselves as responsible for reducing and avoiding such risks. They want to do the right thing and grapple with how to do so. The issue is thus ripe for consideration,” says Sabatello.
Health researchers and IRBs often think about risks mainly in terms of physical harm, but group harm typically involves risks of social harms, such as stigmatization. For example, many deaf individuals who use sign language do not view being deaf as a disability. “In my studies with this community, participants expressed concerns about their data being used in ways that aim to eliminate the community,” reports Sabatello.4 Deaf individuals also had concerns about societal harms if secondary data users describe them in ways that reinforce misperceptions about deaf people.
Researchers may not always be aware of how their studies may harm relevant communities. “While IRB members should be considering these issues, how and what to review regarding group harms is not clearly specified by existing rules and standards,” says Sabatello. In general, IRBs are authorized to decide whether to approve studies. Theoretically, IRBs could decline to approve a study because of group harm. Yet, a previous study found that when IRB members have group-level concerns about a study protocol, such as equity issues, they often feel disempowered to require that researchers actively take steps to reduce the inequities and social risks.5 “Ultimately, IRB members seem to focus on requiring additional stipulations in the consent forms — even though we know that consent forms are already way too long and that many research participants do not understand the consent forms,” says Sabatello. When reviewing study protocols, Sabatello recommends that IRBs ask these questions about group harm:
- What steps have the researchers taken to consider the possible group harms that may arise from the study?
- Has the relevant community been consulted in substantive ways to ensure that the proposed study is not harmful?
- Beyond the consent form, what would a participant from that community be reasonably expected to consider in terms of risks for group harms? (For example, a participant who joins an oncology study may not expect their data to be used for studies on intelligence quotient).
- What are the possible long-term effects of a study protocol? What steps have the researchers and research institutions taken to promote benefit sharing?
“IRBs need to be educated about possible group harms from a broader societal perspective and decline approvals and/or monitor implementation in the long run, as appropriate,” concludes Sabatello.
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Chapman CR, Quinn GP, Natri HM, et al. Consideration and disclosure of group risks in genomics and other data-centric research: Does the Common Rule need revision? Am J Bioeth. 2025;25(2):47-60.
2. Spellecy R, Nencka A. Addressing risk in data-centric research via community engagement. Am J Bioeth. 2025;25(2):85-87.
3. Sterling RL. Genetic research among the Havasupai: A cautionary tale. AMA Journal of Ethics. Published February 2011. https://journalofethics.ama-assn.org/article/genetic-research-among-havasupai-cautionary-tale/2011-02
4. Garofalo DC, Rosenblum HA, Zhang Y, et al. Increasing inclusivity in precision medicine research: Views of deaf and hard of hearing individuals. Genet Med. 2022;24(3):712-721.
5. Sabatello M, Bakken S, Chung WK, et al. Return of polygenic risk scores in research: Stakeholders’ views on the eMERGE-IV study. HGG Adv. 2024;5(2):100281.