2016
DOI: 10.1016/j.ssresearch.2016.04.014
Using crowdsourced online experiments to study context-dependency of behavior

Abstract: We use Mechanical Turk's diverse participant pool to conduct online bargaining games in India and the US. First, we assess the internal validity of crowdsourced experimentation through variation of stakes ($0, $1, $4, and $10) in the Ultimatum and Dictator Game. For cross-country equivalence we adjust the stakes for differences in purchasing power. Our marginal totals correspond closely to laboratory findings. Monetary incentives induce more selfish behavior but, in line with most laboratory findings, the pa…

Cited by 11 publications (4 citation statements)
References 90 publications (130 reference statements)
“…As a side effect, we cannot fully rule out the possibility that differences across samples and contexts may be partly due to differences in monetary incentives. Given the well-documented finding that specific sizes of positive stakes have negligible effects in interactive games of fairness, trust, and reciprocity (Carpenter et al 2005; Keuschnigg et al 2016), it is highly unlikely that stake differences drive our results. In fact, we find the lowest level of prosocial behavior in the setup providing the smallest stakes (Study 4)—a finding that runs counter to the idea that prosociality decreases in stake sizes.…”
Section: Discussion
confidence: 83%
“…Critics may find fault with our heterogeneous stake levels, pointing to the idea that observed prosociality may decrease in stake sizes. Prior evidence from laboratory (e.g., Camerer and Hogarth 1999; Carpenter, Verhoogen, and Burks 2005) and online studies (e.g., Amir et al 2012; Keuschnigg, Bader, and Bracher 2016), however, indicates that—although monetary stakes increase selfishness compared to unincentivized games—differences in positive stakes have negligible effects on laboratory results in fairness and cooperation research.…”
Section: Incentives
confidence: 98%
“…Each participant received a $1 show-up fee, and we incentivized decisions ($2 endowment in the DG, $1 for each player in the TG). Stake levels in this range have proven sufficient to minimize social-desirability effects on MTurk (Keuschnigg, Bader, and Bracher 2016). Participants received on average $1.86.…”
Section: Sampling and Design
confidence: 99%
“…While many approaches exist to measure social norms in incentivised laboratory and online experiments (e.g. Keuschnigg et al., 2016; Krupka & Weber, 2013) and field experiments (e.g. Baldassarri & Abascal, 2017; Winter & Zhang, 2018), they are less suitable in the context of our study.…”
Section: Introduction
confidence: 99%