The success of Amazon Mechanical Turk (MTurk) as an online research platform has come at a price: MTurk has suffered from slowing rates of population replenishment, and growing participant non-naivety. Recently, a number of alternative platforms have emerged, offering capabilities similar to MTurk but providing access to new and more naive populations. After surveying several options, we empirically examined two such platforms, CrowdFlower (CF) and Prolific Academic (ProA). In two studies, we found that participants on both platforms were more naive and less dishonest compared to MTurk participants. Across the three platforms, CF provided the best response rate, but CF participants failed more attention-check questions and did not reproduce known effects replicated on ProA and MTurk. Moreover, ProA participants produced data quality that was higher than CF's and comparable to MTurk's. ProA and CF participants were also much more diverse than participants from MTurk.
Data quality is one of the major concerns of using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We concluded that sampling high-reputation workers can ensure high-quality data without resorting to ACQs, which may lead to selection bias if participants who fail ACQs are excluded post hoc.
We examine key aspects of data quality for online behavioral research between selected platforms (Amazon Mechanical Turk, CloudResearch, and Prolific) and panels (Qualtrics and Dynata). To identify the key aspects of data quality, we first engaged with the behavioral research community to discover which aspects are most critical to researchers and found that these include attention, comprehension, honesty, and reliability. We then explored differences in these data quality aspects in two studies (N ~ 4,000), with or without data quality filters (approval ratings). We found considerable differences between the sites, especially in comprehension, attention, and dishonesty. In Study 1 (without filters), we found that only Prolific provided high data quality on all measures. In Study 2 (with filters), we found high data quality among CloudResearch and Prolific. MTurk showed alarmingly low data quality even with data quality filters. We also found that while reputation (approval rating) did not predict data quality, frequency and purpose of usage did, especially on MTurk: the lowest data quality came from MTurk participants who reported using the site as their main source of income but spent few hours on it per week. We provide a framework for future investigation into the ever-changing nature of data quality in online research, and how the evolving set of platforms and panels performs on these key aspects.
Confessions are people's way of coming clean, sharing unethical acts with others. Although confessions are traditionally viewed as categorical (one either comes clean or not), people often confess to only part of their transgression. Such partial confessions may seem attractive, because they offer an opportunity to relieve one's guilt without having to own up to the full consequences of the transgression. In this article, we explored the occurrence, antecedents, consequences, and everyday prevalence of partial confessions. Using a novel experimental design, we found a high frequency of partial confessions, especially among people cheating to the full extent possible. People found partial confessions attractive because they (correctly) expected partial confessions to be more believable than not confessing. People failed, however, to anticipate the emotional costs associated with partially confessing. In fact, partial confessions made people feel worse than not confessing or fully confessing, a finding corroborated in a laboratory setting as well as in a study assessing people's everyday confessions. It seems that although partial confessions seem attractive, they come at an emotional cost.
While individual differences in decision-making have been examined within the social sciences for several decades, this research has only recently begun to be applied by computer scientists to examine privacy and security attitudes (and ultimately behaviors). Specifically, several researchers have shown how different online privacy decisions are correlated with the "Big Five" personality traits. However, in our own research, we show that the five factor model is actually a weak predictor of privacy preferences and behaviors, and that other well-studied individual differences in the psychology literature are much stronger predictors. We describe the results of several experiments that showed how decision-making style and risk-taking attitudes are strong predictors of privacy attitudes, as well as a new scale that we developed to measure security behavior intentions. Finally, we show that privacy and security attitudes are correlated, but orthogonal.