A recent article published in the Proceedings of the National Academy of Sciences raises serious concerns about the future of online surveys. In the article, Sean Westwood shows that AI agents can be designed to take online surveys and are virtually undetectable. The prospect of researchers not knowing if humans or AI agents are taking their surveys is especially concerning, because AI agents do not provide the same responses to substantive questions as humans do. In other words, anyone using online surveys could potentially obtain biased results from AI agents and have no way to detect the bias.
Westwood’s Suggestions
Fortunately, a set of solutions exists. Westwood identifies five suggestions for mitigating the threat posed by AI agents. We list those here, and we outline Verasight’s specific approach to each.
1.) Ongoing Panelist Validation: Verasight maintains a panel of verified survey respondents. This allows us to evaluate responses over time (not within a single survey but over weeks and months), flagging and removing any inconsistent behavior. This is a fundamentally different approach from that of the many data aggregators and marketplaces, which outsource data collection and cannot monitor respondent behavior over time.
2.) Throttling Mechanisms: Verasight never routes respondents from survey to survey and limits how many surveys respondents can take. This approach improves response quality and removes the incentive to take as many surveys as possible, as quickly as possible.
3.) Panelist Professionalism: Because of the throttling mechanism described above, Verasight panelists take fewer surveys than panelists on other online platforms, ensuring that Verasight respondents are more representative of the general public.
4.) Panelist Quality Checks: In every survey, Verasight collects response quality data. As noted above, this information is tracked across surveys, providing the most comprehensive picture possible of panelist quality.
5.) Location Checks: Verasight verifies whether panelists start the survey from the location where their account is registered and confirms that a VPN is not being used.
Verification Offers Further Assurance That Verasight Panelists Are Human
In addition to the five suggestions Westwood provides, Verasight has developed verification technology that further guards against AI agents. First, all respondents must confirm a U.S. mobile phone number linked to a paid carrier (e.g., no Google Voice or VoIP numbers). Second, we document and monitor the source of every panel signup to ensure that only approved signups occur. Finally, we check that respondent information aligns with publicly available information, such as the voter file.
In other words, to deploy an AI agent on the Verasight verified panel, a person would have to: 1.) provide a verified cell phone number associated with a major U.S. carrier; 2.) provide details that we can match against verified public records, such as whether the individual is registered to vote or owns a car; 3.) consistently take surveys from the location of registration; 4.) provide internally consistent responses across surveys (e.g., age, voter status, and number of family members do not change in implausible ways; a simplified illustration appears below); and 5.) avoid raising any Verasight data quality or AI flags in any survey, including the flags for implausible accuracy that Westwood’s PNAS article documents. Even in the extremely unlikely event that a sophisticated AI agent could somehow pass all of these checks, our limits on the number of surveys a panelist can take in a given week mean there would be no upside to deploying one.
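To make point 4 concrete, here is a minimal, purely illustrative sketch of what a cross-wave consistency check could look like. The field names, rules, and thresholds are hypothetical examples chosen for clarity; they are not Verasight’s actual implementation.

```python
# Illustrative sketch only: a simplified cross-wave consistency check.
# Field names, rules, and thresholds are hypothetical, not Verasight's system.

def flag_inconsistencies(waves):
    """Compare one panelist's responses across survey waves and return flags
    for changes that are implausible (e.g., reported age decreasing)."""
    flags = []
    for earlier, later in zip(waves, waves[1:]):
        # Age should never decrease from one wave to the next.
        if later["age"] < earlier["age"]:
            flags.append("age decreased between waves")
        # Household size rarely jumps by large amounts in a short window.
        if abs(later["household_size"] - earlier["household_size"]) > 3:
            flags.append("implausible change in household size")
        # Voter registration status flipping between waves is suspicious.
        if later["registered_to_vote"] != earlier["registered_to_vote"]:
            flags.append("voter registration status changed")
    return flags

# Example usage with two hypothetical waves for one panelist.
waves = [
    {"age": 34, "household_size": 3, "registered_to_vote": True},
    {"age": 33, "household_size": 3, "registered_to_vote": True},
]
print(flag_inconsistencies(waves))  # ['age decreased between waves']
```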


