Evaluating Online Data Collection Platforms Using A Simple Rule-Following Task

Bibliographic Details
Published in: Economics Letters, Vol. 255, p. 112509
Main Authors: Suri, Dominik; Kube, Sebastian; Schultz, Johannes
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2025
ISSN: 0165-1765
DOI: 10.1016/j.econlet.2025.112509

Summary: High-quality experimental data is crucial for human-subject research. The increasing prevalence of online experimental studies has raised concerns regarding the quality and reliability of data collected in such environments. Recent evidence indicates that data quality varies across popular crowdsourcing platforms. We test whether this also holds for less complex, easily comprehensible tasks. We find that compliance rates in a simple rule-following task are significantly lower in our Amazon Mechanical Turk sample than in samples from Prolific Academic and a German university lab.

Highlights:
• Behavioral data quality varies across online crowdsourcing platforms.
• Participants perform a simple rule-following task, namely the coins task.
• We recruit participants on MTurk, on Prolific, and from a university lab database.
• Data quality and rule compliance on MTurk differ from those on the other two platforms.
• Our findings align with other studies showing decreased data quality on MTurk.