By Stacey Kusterbeck
When researchers recruiting for a clinical trial received more than 300 responses from interested individuals in only a few hours, it was highly encouraging — at first. “Only after we started connecting with focus groups did we notice red flags,” recalls Leah Rand, DPhil, lead author of the study and a research scientist with the Program on Regulation, Therapeutics, and Law at Brigham and Women’s Hospital.
Rand and colleagues had offered $100 to attend a 60- to 90-minute focus group about how older adults decide whether to take anticoagulants.1 The researchers had reached out to a patient advocacy organization, hoping it would share the recruitment notice with its members through a newsletter or email listserv. Instead, the organization posted it on Facebook.
“Our recruitment got taken over by people who just wanted the compensation,” reports Rand. The focus groups were held on Zoom to allow more diverse participation across regions and to include people who lacked the means to meet at a specified location during the day. Many people in the focus groups had little or nothing to say, or made comments that were off topic. Some appeared to attend more than one focus group, logging onto Zoom calls from the same computer under different identities. Others appeared to be in the same room, based on the identical distinctive wallpaper behind them. In another instance, two participants with their cameras off responded to each other’s names, suggesting that these were aliases. One participant joined a second focus group with the same account name used in the first, then quickly changed to a different name.
The researchers did not use any of the data from the suspect participants in the study, but did compensate them as promised. Rand and colleagues compared the responses of valid participants and impostors, and demonstrated that the fraudulent responses can meaningfully change findings of qualitative studies.2 For instance, the two groups had very different takes on the value of direct-to-consumer advertising. “If you were to come up with a new research question or policy proposal based on the study, you would have conflicting accounts of the direct-to-consumer ads,” says Rand.
Social media allows researchers to include people who normally do not participate in research. “Social media can help to recruit — at lower cost and greater convenience — a diverse pool of participants that might be otherwise difficult to reach,” says Jonathan Darrow, SJD, another of the study authors and a former professor at Harvard Medical School.
However, fraud is a persistent and pervasive problem, according to multiple recent studies.3-9 “Researchers should be wary of using social media, where it is relatively easy for fraudsters located anywhere in the world to impersonate someone else,” says Darrow. For researchers recruiting online for the first time, the sheer number of fraudsters and the creative methods used to obtain compensation can be overwhelming. “It takes more time and effort from researchers to confirm that these are people who actually meet our study eligibility criteria,” says Rand. Some effective approaches to minimize the risk of fraud:
- Schedule a short screening interview with each individual applicant. During that call, the researcher can ask questions that only those meeting eligibility criteria would be likely to know, says Darrow. For example, fraudsters impersonating patients with a particular condition are unlikely to know how many times per day or week a particular treatment is administered or the key warnings a doctor would be likely to mention (such as being sure to take a pill with food or not to mix it with alcohol).
- Check the applicant’s social media accounts. Social media accounts sometimes publicly display the year an account was created or the number of contacts a member has. “Very recent account creation, combined with very small numbers of contacts, is a red flag for fraud,” says Darrow.
- For studies involving prescription drugs, ask participants to send a photo of their pill bottle showing a name that matches the one they provided. This requires someone to send a photo through a smartphone. “It maybe feels less private. But it’s a way to confirm that the person exists and they take the drug that they say they take,” says Rand.
- For remote focus groups, state upfront that participants must have their cameras on to be compensated. “It’s a low barrier, but you know that each box is associated with a person. You can identify someone who returns to another focus group to get additional compensation,” says Rand.
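The account-age and repeat-attendance red flags above can be sketched as a simple screening checklist. This is an illustrative sketch, not the researchers’ actual process: the thresholds (90 days, 10 contacts) and the Applicant fields are hypothetical, and any flag is a prompt for a follow-up conversation, not an automatic rejection.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Applicant:
    name: str
    account_created: date        # from a public profile, if visible
    contact_count: int           # from a public profile, if visible
    sessions_joined: list = field(default_factory=list)

def red_flags(applicant, today=date(2025, 1, 1)):
    """Return human-readable fraud red flags for manual review.
    Thresholds are illustrative, not validated cutoffs."""
    flags = []
    account_age_days = (today - applicant.account_created).days
    # Darrow's red flag: very new account plus very few contacts
    if account_age_days < 90 and applicant.contact_count < 10:
        flags.append("new account with few contacts")
    # Rand's red flag: the same person appearing in multiple sessions
    if len(applicant.sessions_joined) > 1:
        flags.append("joined multiple focus groups")
    return flags
```

An applicant whose account was created weeks ago, with a handful of contacts, who turns up in two sessions would trip both flags; an established account attending once trips none.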
These measures all create barriers, however minor they may seem, to participation. “It’s a tradeoff. Maybe some of the people you want to hear from most are not going to respond. But you are confirming that these are people who actually meet the study eligibility criteria,” says Rand.
Researchers should consider their own biases when evaluating whether someone is legitimate. “Some of the things that make us suspect this person isn’t who they say they are can be subjective judgments about the quality of someone’s speech or insights. We might have preconceptions about how we expect them to describe their experiences. We shouldn’t be so quick to dismiss something that doesn’t fit into that,” cautions Rand.
In the disability research community, fraud prevention strategies ended up excluding the very people disability research is meant to serve. “Common tools like CAPTCHAs, bot-detection scores, and response-time algorithms are blunt instruments. They catch some fraud but often flag legitimate participants too,” explains Kelsey S. Goddard, PhD, an assistant research professor at the Institute for Health and Disability Policy Studies at The University of Kansas. People using assistive technology might have slower or more uniform response patterns that mimic bots, or might be unable to complete visual or audio CAPTCHAs.
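The response-time algorithms Goddard describes are often simple threshold checks, which is exactly why they are blunt instruments. A minimal sketch (thresholds are hypothetical) shows the problem: a screen-reader user may answer slowly but with very uniform timing, tripping the same low-variance filter that is meant to catch bots.

```python
import statistics

def looks_like_bot(response_times_sec, min_median=2.0, min_stdev=0.5):
    """Naive response-time filter: flag respondents who answer
    implausibly fast OR with near-uniform timing across questions.
    CAVEAT: assistive-technology users can produce slow but very
    uniform timings, so this heuristic misflags real participants."""
    median = statistics.median(response_times_sec)
    spread = statistics.stdev(response_times_sec)
    return median < min_median or spread < min_stdev
```

A bot racing through questions is flagged for speed, but so is a deliberate, steady respondent flagged for uniformity; only irregular human-looking timing passes, which is precisely the accessibility tradeoff Goddard warns about.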
Disability researchers rely on social media recruitment to target hard-to-reach or low-incidence populations, such as people with rare conditions, multiple marginalized identities, or those disconnected from formal service systems. “These platforms are often the most direct way to reach people who may not be plugged into academic or organizational networks,” says Goddard.
Unfortunately, any study offering even small incentives (such as $5 or $10 gift cards) is vulnerable. “We’ve seen bots and scammers flood studies within hours of a flyer being posted,” reports Goddard. In some cases, hundreds of fake responses come in, overwhelming legitimate data. Some scammers impersonate disabled people or caregivers, fabricating convincing but inaccurate narratives to claim the incentive. “This is particularly dangerous in small-sample studies, where even a few bad responses can skew findings dramatically,” says Goddard.
Some researchers have to throw out large chunks of data or their entire dataset because they could not verify which responses were real. Others have stopped social media recruitment entirely because of the time it takes to sort through all the fraudulent responses. “But abandoning social media means risking more homogeneous samples, which undermines the equity goals of disability research,” says Goddard.
Without social media, researchers often fall back on institutional recruitment channels, such as hospitals, universities, clinics, service providers, or advocacy organizations. These sources tend to reach participants who already are engaged with formal systems of care and who are more likely to be insured and better educated. In a recent paper, Goddard and colleagues argued that fraud prevention strategies are needed that do not come at the cost of accessibility and inclusivity.10 The authors suggest institutional review boards (IRBs) use these approaches to ensure that disability research is both ethical and inclusive:
- IRBs should consider how proposed fraud detection methods might inadvertently exclude legitimate participants, especially those using assistive technologies or living with cognitive or sensory disabilities. “These tools, while well-intentioned, can create barriers that disproportionately affect the very populations the research aims to include,” says Goddard. Identity verification, response-time filters, or CAPTCHAs do reassure IRBs that the research team is taking fraud seriously, especially in an era of increasing scrutiny around data integrity. But these tools often were not designed with accessibility in mind. Cognitive or sensory disabilities may affect how quickly or accurately someone can complete tasks such as CAPTCHA tests or multi-step logic checks. Some identity verification methods (such as requiring government ID or requiring use of webcams) can raise privacy concerns and deter legitimate respondents. “Yes, the best fraud detection tools, like one-on-one screening conversations or multi-stage verification, are labor-intensive. But at least they allow researchers to apply human judgment. The challenge is that IRBs rarely balance the accessibility tradeoffs of digital fraud tools with the risk-benefit analysis they typically apply to physical risks in research,” argues Goddard.
- IRBs should evaluate how incentives are handled. Some IRBs require that participant compensation be disclosed in recruitment materials. “This is often framed as part of ethical transparency — making sure that people understand the time commitment and what they’ll receive in return,” says Goddard. However, requiring researchers to prominently advertise gift cards or payments in recruitment materials is an open invitation to fraudsters. “It’s important to ask whether researchers are using layered verification strategies, such as follow-up questions or tiered incentives, that protect against scams while still respecting participant privacy and autonomy,” says Goddard.
When considering whether the study will adequately protect participants’ data privacy, IRBs also should consider how accessibility and anonymity can open the door to exploitation. If surveys are anonymous, for instance, there is no way to verify if someone is submitting multiple entries to get multiple gift cards. If participation is entirely remote and accessible to anyone with a link, scammers can use bots, fake accounts, or repurpose artificial intelligence-generated content to pose as legitimate respondents.
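One hedge against repeat entries in otherwise anonymous surveys is a one-way hash of a contact detail collected solely for incentive delivery. The sketch below is hypothetical (the salt value and field names are invented for illustration): it lets a research team spot duplicate submissions without storing the email address alongside survey answers.

```python
import hashlib

def pseudonymous_id(email, study_salt):
    """One-way salted hash of a contact email. The raw email is
    never stored with responses; only this digest is retained."""
    normalized = email.strip().lower()
    return hashlib.sha256((study_salt + normalized).encode()).hexdigest()

seen = set()  # digests of emails already used to claim an incentive

def is_duplicate(email, study_salt="study-42-salt"):  # salt is illustrative
    digest = pseudonymous_id(email, study_salt)
    if digest in seen:
        return True
    seen.add(digest)
    return False
```

Because the email is normalized before hashing, trivially varied resubmissions (changed capitalization, stray whitespace) still collide with the first entry, while the stored digests reveal nothing about who participated.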
“When researchers avoid burdensome verification steps to reduce barriers, they often open the door to bad actors exploiting that trust. This doesn’t mean accessibility or anonymity should be abandoned. It just means that protocols need to strike a better balance: enough verification to deter fraud, without creating insurmountable barriers for real participants,” concludes Goddard.
Stacey Kusterbeck is an award-winning contributing author for Relias. She has more than 20 years of medical journalism experience and greatly enjoys keeping on top of constant changes in the healthcare field.
References
1. Rand LZG, McGraw S, Wang J, et al. Patient perspectives on evidence supporting drug safety and effectiveness: “What does it mean for me?” J Am Geriatr Soc. 2024;72(9):2874-2877.
2. Rand LZ, McGraw S, Wang J, et al. Impostor syndrome: Fraudulent participants in qualitative research can skew results. J Law Med Ethics. 2025; Jul 7:1-6. doi: 10.1017/jme.2025.10123. [Online ahead of print].
3. Nesoff ED, Palamar JJ, Li Q, et al. Challenging the continued usefulness of social media recruitment for surveys of hidden populations of people who use opioids. J Med Internet Res. 2025;27:e63687.
4. Mayer C, Tryon R, Ricks S, et al. Preventing fraudulent research participation: Methodological strategies and ethical impacts. J Genet Couns. 2025;34(3):e70048.
5. MacDonald KV, Nguyen GC, Sewitch MJ, Marshall DA. Identifying and managing fraudulent respondents in online stated preferences surveys: A case example from best-worst scaling in health preferences research. Patient. 2025;18(4):373-390.
6. Pageau LM, Ling J. Improving data credibility in online recruitment: Signs and strategies for detecting fraudulent participants when using ResearchMatch. Contemp Clin Trials. 2025;154:107925.
7. Ng WZ, Erdembileg S, Liu JC, et al. Increasing rigour in online health surveys through the reduction of fraudulent data. J Med Internet Res. 2025; Jun 26. doi: 10.2196/68092. [Online ahead of print].
8. Comachio J, Poulsen A, Bamgboje-Ayodele A, et al. Identifying and counteracting fraudulent responses in online recruitment for health research: A scoping review. BMJ Evid Based Med. 2025;30(3):173-182.
9. Mizrach HR, Markwart M, Rosen RL, et al. Reddit for research recruitment? Social media as a novel clinical trial recruitment tool for adolescent and young adult (AYA) cancer survivors. J Cancer Surviv. 2024; Dec 5. doi: 10.1007/s11764-024-01719-8. [Online ahead of print].
10. Goddard KS, Hall JP, Kurth NK. Accessible, not exploitable: Navigating fraud prevention in disability research. Disabil Health J. 2025; Apr 29:101843. doi: 10.1016/j.dhjo.2025.101843. [Online ahead of print].
Fraudulent participants increasingly compromise research recruitment via social media, threatening data integrity. Although verification methods can deter scams, they often create accessibility barriers, especially in disability research. Ethicists and institutional review boards must balance inclusivity with fraud prevention strategies.