2018
DOI: 10.2308/bria-52340

Crowdsourcing Intelligent Research Participants: A Student versus MTurk Comparison

Abstract: The use of online workers as research participants has grown in recent years, prompting interest in how online workers compare to traditional accounting research participants. To date, no study has compared the relative intelligence of online workers to student subjects. Such a comparison may be important to behavioral accounting researchers given the homogeneity of accounting students relative to online subject pools and given prior research suggesting accounting students have relatively high analytic ability…

Cited by 26 publications (18 citation statements: 1 supporting, 17 mentioning, 0 contrasting; citing works published 2020–2024)
References 45 publications
“…Nearly half of the participant groups fall under the category of "non-specific participant", where researchers did not require participants to meet any specific technical qualification criteria. This is consistent with the conclusions of other studies: Hunt & Scheetz (2019) believe crowdsourcing platforms are best suited for obtaining average individuals within society; Farrell et al. (2017) conclude that online workers can be suitable proxies in accounting research that investigates the decisions of non-experts; and Buchheit et al. (2019) find that online workers are good research participants when fluid intelligence (defined in their article as general reasoning and problem-solving ability) is needed for reasonably complex experimental tasks in which incoming knowledge is not critical. However, researchers also raise potentially significant issues with the general MTurk population.…”
Section: Overview of MTurk (supporting)
Confidence: 90%
“…Participants (N = 332, M_age = 37.07, SD_age = 11.36, 36% female, 79% United States) were recruited from Amazon's MTurk and were provided monetary compensation. Prior research has supported the validity of findings obtained from MTurk participants, and we applied exclusion guidelines from these sources to ensure sufficient data quality (Barends & de Vries, 2019; Buchheit, Dalton, Pollard, & Stinson, 2019). Only participants who had completed more than 50 MTurk tasks with greater than 95% lifetime approval were included.…”
Section: Methods (mentioning)
Confidence: 99%
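For context, screening criteria like those reported above are typically enforced through MTurk worker qualifications. Below is a minimal Python sketch, assuming boto3's MTurk client and valid AWS credentials, that encodes the two filters from the cited statement (more than 50 completed tasks, greater than 95% lifetime approval) using Amazon's documented system qualification types. The HIT metadata (title, reward, survey file) are illustrative placeholders, not details from the cited study.

import boto3

# Connect to the MTurk API (the production endpoint lives in us-east-1).
mturk = boto3.client("mturk", region_name="us-east-1")

# Amazon's built-in system qualification types:
#   00000000000000000040 = NumberHITsApproved
#   000000000000000000L0 = PercentAssignmentsApproved
qualification_requirements = [
    {
        "QualificationTypeId": "00000000000000000040",  # completed tasks > 50
        "Comparator": "GreaterThan",
        "IntegerValues": [50],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
    {
        "QualificationTypeId": "000000000000000000L0",  # lifetime approval > 95%
        "Comparator": "GreaterThan",
        "IntegerValues": [95],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
]

# Placeholder HIT: only workers who pass both qualifications can
# discover, preview, or accept the task.
response = mturk.create_hit(
    Title="Decision-making survey (placeholder)",
    Description="A short research survey.",
    Keywords="survey, research",
    Reward="1.00",
    MaxAssignments=332,
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=86400,
    Question=open("survey_question.xml").read(),  # HTMLQuestion/ExternalQuestion XML
    QualificationRequirements=qualification_requirements,
)
print("Created HIT:", response["HIT"]["HITId"])

Because ActionsGuarded is set to DiscoverPreviewAndAccept, unqualified workers never see the task at all, which is how studies of this kind keep the recruited sample within the stated screening criteria.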
“…However, they may contain distortions derived from the inclusion of false information (e.g., about identity) and from inattention in the answers provided (Fleischer et al., 2015; Wessling et al., 2017). To deal with potential biases in the use of MTurk, it is recommended that participants with higher intelligence scores be chosen (Buchheit et al., 2019). In this study, intelligence scores are higher than 90%.…”
Section: Participants and Procedures (mentioning)
Confidence: 99%