Open access publication

Article, Early Access, 2023

Do MTurkers collude in interactive online experiments?

BEHAVIOR RESEARCH METHODS, ISSN 1554-351X, DOI 10.3758/s13428-023-02220-3

Contributors

Ghita, Razvan S. (Corresponding author) [1]

Affiliations

  1. [1] Southern Denmark Univ, Dept Business & Management, Univ Pk 1, DK-6000 Kolding, Denmark

Abstract

One issue that can affect the internal validity of interactive online experiments recruiting participants through crowdsourcing platforms is collusion: participants could act upon information shared through channels external to the experimental design. Using two experiments, I measure how prevalent collusion is among MTurk workers and whether it depends on experimental design choices. Although participants had incentives to collude, I find no evidence that MTurk workers colluded in the treatments that resembled the design of most other interactive online experiments. This suggests that collusion is not a concern for data quality in typical interactive online experiments that recruit participants using crowdsourcing platforms. However, approximately 3% of MTurk workers do collude when the payoff from collusion is unusually high. Collusion should therefore not be overlooked as a threat to data validity in such experiments when participants have strong incentives to engage in it.

Keywords

Amazon Mechanical Turk, Behavioral research, Collusion, Experimental methodology, Internet interactive experiments

Data Provider: Clarivate