Some experts fear that AI agents, survey-taking bots created by large language models (LLMs), pose a major threat to the survey industry. The fear is that these AI agents can evade many current strategies used to detect fraudulent respondents, producing error-prone and biased survey results.
While Verasight has developed many methods to verify that respondents are humans, one of the most important strategies not only targets AI fraud but also helps ensure human respondents provide attentive and sincere responses. The strategy is simple: limit the number of surveys respondents are invited to take. Westwood refers to this as “throttling.”
The Concern: Fraudsters Could Monetize AI Agents by Having the Agents Take Massive Numbers of Surveys
The worst-case scenario for Westwood is that “LLMs could transform survey fraud from a labor-intensive/low-margin cottage industry into a potentially lucrative and scalable black market for fraudulent data.” To understand this concern, it is important to recognize that most of the survey industry incentivizes respondents to take as many surveys as possible. This happens in two ways.
The most prominent way organizations encourage respondents to take huge numbers of surveys is by relying on survey marketplaces or aggregators. These marketplaces/aggregators combine responses from hundreds of different survey response providers. Enns and Rothschild have shown that these providers often subcontract to still other providers, creating an opaque web of survey responses with no disclosure of how responses are collected or verified. As a result, respondents are incentivized to take as many surveys as possible as quickly as possible, collecting a payment for each completed survey.
The second scenario in which individuals can take an almost unlimited number of surveys arises when survey firms, instead of outsourcing responses to marketplaces/aggregators, maintain their own panel of respondents. Sometimes these individuals are recruited through web advertisements as part of an opt-in panel. Other times, respondents are recruited through sophisticated probability-based sampling methods. With both strategies (though more so with probability-based sampling), these companies incur substantial costs recruiting individuals to join their panel, creating an incentive to invite panelists to take as many surveys as possible. This can lead organizations to invite respondents to numerous individual surveys. In other instances, respondents are routed from one survey to the next, allowing them to take survey after survey. Whether the survey is conducted from an opt-in panel or a probability-based panel, it is important to ask organizations you work with how many surveys panelists are allowed to take and whether panelists are ever routed from one survey to another. Further, even if you work with an organization that has its own probability-based or opt-in panel, it is crucial to ask whether they ever utilize marketplaces/aggregators for respondents.
Solutions: Throttling to Break the AI Agent Incentive
The use of LLMs to create AI agents to fraudulently take surveys and generate what Westwood calls a potentially “lucrative and scalable black market for fraudulent data” depends on these agents being able to take a massive number of surveys. In other words, the AI fraud Westwood warns about would involve AI agents joining the opaque web that supplies responses to marketplaces/aggregators or AI agents joining panels that do not place sufficient limits on the number of surveys their panelists can take. For this reason, Westwood identifies throttling as one of the critical steps survey providers should implement.
Indeed, limiting the number of surveys that can be taken eliminates the ability to use AI agents at scale to maximize survey profits. If someone plans to deploy an AI agent (or agents) to monetize survey taking, they would be wasting their efforts on companies that significantly limit the number of surveys that can be taken. To increase the profit potential, fraudsters will instead focus on organizations that place no limits on the number of surveys taken and that route people (or AI agents) from one survey to the next. Other strategies exist to detect and block AI agents, but throttling the number of surveys available is a simple and crucial approach.
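As a rough illustration of the idea (a minimal sketch, not Verasight's actual system), a per-respondent weekly cap can be enforced with a small bookkeeping structure; the `max_per_week` limit and the class/method names here are assumptions for illustration only:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

class SurveyThrottle:
    """Illustrative throttle: cap how many surveys one respondent
    can complete within a rolling seven-day window."""

    def __init__(self, max_per_week=2):
        self.max_per_week = max_per_week
        # respondent_id -> list of completion timestamps
        self.history = defaultdict(list)

    def may_invite(self, respondent_id, now=None):
        """Return True if the respondent is under the weekly cap."""
        now = now or datetime.now(timezone.utc)
        window_start = now - timedelta(days=7)
        # Keep only completions inside the rolling window.
        recent = [t for t in self.history[respondent_id] if t >= window_start]
        self.history[respondent_id] = recent
        return len(recent) < self.max_per_week

    def record_completion(self, respondent_id, when=None):
        """Log a completed survey for this respondent."""
        self.history[respondent_id].append(when or datetime.now(timezone.utc))
```

Under a cap like this, routing a respondent (or an AI agent) from survey to survey simply stops working once the weekly limit is reached, which is what removes the scale that makes fraudulent survey taking profitable.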
Additional Benefits of Throttling: Encouraging Attentive and Sincere Human Responses
Verasight has throttled the number of surveys available since our founding—before AI agents were ever thought of in connection with surveys. The reason is simple. Asking respondents to take a high number of surveys or routing respondents from one survey to the next also introduces problems when respondents are human.
Anyone who has taken a survey knows that it gets harder to maintain attention as the survey gets longer. Now imagine taking two surveys back-to-back. How about three surveys, or five surveys, or 20 surveys? The implications for response quality are clear.
Taking back-to-back surveys can also introduce question order and priming effects—from one survey to another! Further, taking numerous surveys every day or every week can create a familiarity with often-surveyed topics that those in the general population do not have. In other words, even if respondents maintain attention and provide sincere responses, these responses may reflect experiences from the myriad other surveys recently taken. It has always been the case that limiting how many surveys respondents can take improves response quality. This approach now holds the added benefit of disincentivizing fraudsters interested in monetizing AI agents.
What you should ask your survey provider:
By asking potential survey providers the following questions, researchers and organizations collecting survey data can mitigate the chances of AI agents entering the data and maximize the accuracy and representativeness of the data they collect.
- What is the maximum number of surveys respondents can take in a week?
  - Some organizations may count routing from survey to survey as a single instance, so be sure to ask for clarification about routing and how the total number of surveys is calculated.
- Do you use any synthetic (i.e., non-human) survey response data?
  - Instead of ensuring survey respondents are humans providing accurate and sincere responses, some companies are actually selling synthetic survey respondents. We strongly recommend against synthetic data, as our research consistently shows that surveys based on synthetic respondents lead to incorrect conclusions (see, for example, here, here, and here).
- Do you ever rely on marketplaces/aggregators to obtain survey respondents?
  - Many survey firms, even those with their own probability-based or opt-in panels, turn to marketplaces/aggregators for respondents, which they then resell to you as the end client. This means neither you nor the company you contract with has any direct oversight of how respondents were verified or how many surveys respondents took before your survey.
- If survey respondents are sampled directly, what mechanisms do you have in place to detect potential AI fraud and to minimize human inattentiveness?

