It’s beneficial for data quality when respondents feel fairly compensated.
We value our respondents’ insights and want them to be compensated fairly for their time.
Recently, we asked respondents from Verasight and two large, popular sample providers, Company A and Company B: “Compared to other survey panels, how fairly do you believe [this panel] compensates you for your survey responses?” While Verasight handles all respondent compensation in-house, Company A and Company B both draw respondents from multiple sources. As a result, compensation on a given survey may vary among Company A and Company B respondents depending on the panel from which an individual respondent was sourced.
Compared to other survey panels, how fairly do you believe that [company] compensates you for your survey responses?
Our results show that over half of Verasight respondents (52%) report that Verasight compensates them more fairly than other survey panels of which they are members, compared with about a third of respondents from Company A (34%) and Company B (29%).
At Verasight, respondents earn, on average, $1 per roughly ten-minute survey (plus many other opportunities to earn rewards). That works out to an effective rate of about $6 per hour, much higher than many survey sites, which pay approximately $1 per hour. This fair compensation, coupled with a low frequency of survey requests, promotes a panel of respondents who are happy to take occasional surveys with Verasight, rather than a panel of individuals looking to maximize compensation by speeding through surveys with low-quality responses. Because rewards accumulate gradually, the Verasight panel is also unappealing to “professional” survey takers and those looking only to complete as many surveys as possible in a short period of time.
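The pay comparison above is simple arithmetic; as a quick sketch (the function name is ours, and the figures are the averages stated above, not exact quotes for any one survey):

```python
def effective_hourly_rate(pay_dollars: float, minutes: float) -> float:
    """Convert a per-survey payment into an effective hourly rate."""
    return pay_dollars * 60 / minutes

# Verasight's average: $1 for a ~10-minute survey.
print(effective_hourly_rate(1.00, 10))  # 6.0 dollars/hour, vs. ~$1/hour elsewhere
```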
Reach out to Verasight today to learn more about how we help researchers collect high-quality data.
Survey Details:
Verasight:
The Benchmarking Survey A was conducted by Verasight from January 30 - February 23, 2023. The sample size is 1,000 respondents. All respondents were recruited from an existing Verasight panel, composed of individuals recruited via both address-based probability sampling and online advertisements.
The data are weighted to match the Current Population Survey on age, race/ethnicity, sex, income, education, region, and metropolitan status, as well as to population benchmarks of partisanship and 2020 presidential vote. The margin of error, which incorporates the design effect due to weighting, is +/- 3.1%.
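For readers who want to see how a design-effect-adjusted margin of error comes together, here is a minimal sketch, assuming the conventional 95% confidence level and a worst-case proportion of p = 0.5 (the report does not state these choices explicitly, so treat them as our assumptions):

```python
import math

def margin_of_error(n: int, deff: float = 1.0, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, inflated by the design effect (deff)."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# With n = 1,000 and no inflation (deff = 1), the simple margin of error is
# already about +/- 3.1%, matching the figure reported for the Verasight survey.
print(f"{margin_of_error(1000):.1%}")  # 3.1%
```

Under the same assumptions, a reported margin of error larger than the simple formula gives for that sample size implies a design effect above 1 from weighting.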
To ensure data quality, the Verasight data team implemented a number of quality assurance procedures, including screening out responses from foreign IP addresses, potential duplicate respondents, potential non-human responses, and respondents who failed attention and straight-lining checks.
Company A:
The Benchmarking Survey B was conducted by Company A from January 20 - January 26, 2023. The sample size is 1,002 respondents. Respondents were recruited from a variety of vendors sampled through the Company A survey sample marketplace.
The data are weighted to match the Current Population Survey on age, race/ethnicity, sex, income, education, region, and metropolitan status, as well as to population benchmarks of partisanship and 2020 presidential vote. The margin of error, which incorporates the design effect due to weighting, is +/- 3.4%.
Company B:
The Benchmarking Survey C was conducted by Company B from February 9 - February 23, 2023. The sample size is 1,014 respondents. Respondents were recruited from a variety of vendors sampled through the Company B survey sample marketplace.
The data are weighted to match the Current Population Survey on age, race/ethnicity, sex, income, education, region, and metropolitan status, as well as to population benchmarks of partisanship and 2020 presidential vote. The margin of error, which incorporates the design effect due to weighting, is +/- 3.3%.
To ensure data quality, Company B implemented a number of quality assurance procedures, including screening out respondents who failed straight-lining checks, gave nonsensical answers to open-ended questions, or showed patterned responses.