In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene, & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and the 7.5% in the forced-delay condition who did not adhere to the time constraints); we observed a difference in contributions of −0.37 percentage points, compared with an 8.6-percentage-point difference calculated from the original data. Analyzing the data as the original article did, including only participants who complied with the time constraints, the RRR observed a 10.37-percentage-point difference in contributions, compared with a 15.31-percentage-point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.
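The contrast between the intent-to-treat and compliant-only analyses can be sketched in a few lines. The data below are simulated and purely illustrative; no values from the RRR or the original study are reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: contribution as % of endowment, condition
# assignment, and compliance with the assigned time constraint.
n = 2000
time_pressure = rng.integers(0, 2, n).astype(bool)
complied = rng.random(n) < np.where(time_pressure, 0.66, 0.93)
contribution = rng.normal(50, 20, n).clip(0, 100)

def effect(mask):
    """Time-pressure minus forced-delay mean contribution within `mask`."""
    return (contribution[mask & time_pressure].mean()
            - contribution[mask & ~time_pressure].mean())

itt = effect(np.ones(n, dtype=bool))  # intent-to-treat: everyone randomized
pp = effect(complied)                 # compliant-only (per-protocol)
print(f"ITT: {itt:+.2f} pp, compliant-only: {pp:+.2f} pp")
```

Because compliance can correlate with cooperativeness, the two estimates can diverge even when assignment itself has no causal effect, which is the selection-bias concern the abstract raises.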
Individuals with autism spectrum condition (ASC) have diagnostic impairments in skills whose acquisition is thought to be implicit; however, it is not clear whether ASC individuals show specific implicit learning deficits. We compared ASC and typically developing (TD) individuals matched for IQ on five learning tasks: four implicit learning tasks (contextual cueing, serial reaction time, artificial grammar learning, and probabilistic classification learning) that used procedures expressly designed to minimize the use of explicit strategies, and one comparison explicit learning task, paired-associates learning. We found implicit learning to be intact in ASC. Beyond the absence of group differences, there was evidence of statistical equivalence between the groups on all the implicit learning tasks. This was not a consequence of compensation by explicit learning ability or IQ. Furthermore, there was no evidence relating implicit learning to ASC symptomatology. We conclude that implicit learning mechanisms are preserved in ASC and propose that it is disruption by other atypical processes that impacts negatively on the development of skills associated with implicit acquisition.
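Statistical equivalence of the kind reported here is commonly assessed with two one-sided tests (TOST); a minimal sketch with simulated scores follows (the equivalence margin and data are assumptions for illustration, not the study's values, and the `alternative` argument assumes SciPy ≥ 1.6):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
asc = rng.normal(0.5, 0.1, 40)  # hypothetical implicit-learning scores, ASC
td = rng.normal(0.5, 0.1, 40)   # hypothetical scores, typically developing
margin = 0.1                    # smallest group difference of interest

# TOST: reject both one-sided nulls (diff <= -margin and diff >= +margin)
# to conclude the groups are statistically equivalent within the margin.
p_lower = stats.ttest_ind(asc + margin, td, alternative="greater").pvalue
p_upper = stats.ttest_ind(asc - margin, td, alternative="less").pvalue
p_tost = max(p_lower, p_upper)
print(f"TOST p = {p_tost:.4f}")
```

A small `p_tost` supports equivalence within ±`margin`; an ordinary t-test failing to reject the null would not, by itself, license that conclusion.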
Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
In this paper, we provide a domain-general scoping review of the nudge movement, reviewing 422 choice architecture interventions in 156 empirical studies. We report the distribution of the studies across countries, years, domains, subdomains of applicability, and intervention types, along with the moderators associated with each intervention category, to characterize the current state of the nudge movement. Furthermore, we highlight characteristics of the studies, and of experimental and reporting practices, that can hinder the accumulation of evidence in the field. Specifically, we found that 74% of the studies were mainly motivated to assess the effectiveness of an intervention in one specific setting, whereas only 24% focused on exploring moderators or underlying processes. We also observed that only 7% of the studies applied power analysis, only 2% used guidelines aimed at improving the quality of reporting, no study in our database was preregistered, and the intervention nomenclatures in use were non-exhaustive and often had overlapping categories. Building on these observations and on solutions proposed in other fields, we provide directly applicable recommendations for future research to support evidence accumulation on why and when nudges work.
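An a priori power analysis of the kind only 7% of the reviewed studies reported can be done in a few lines. This sketch uses the standard normal approximation for a two-sample t-test; the effect sizes are illustrative, not drawn from the review:

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample t-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(0.5))  # medium standardized effect -> 63 per group
print(n_per_group(0.2))  # small standardized effect -> 393 per group
```

The steep growth of required n as the expected effect shrinks is exactly why underpowered one-setting nudge studies accumulate evidence slowly.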
Over the last ten years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgments of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries, and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology that allowed for correlated dimensions, we observed much less generalization. Collectively, these results suggest that the valence-dominance model generalizes well across regions when the dimensions are forced to be orthogonal, but that regional differences emerge when the extraction method allows the dimensions to correlate and the dimension-reduction solution is rotated.
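The methodological point, that principal components are uncorrelated by construction even when the generating dimensions correlate, can be illustrated with simulated ratings (all numbers below are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# 200 faces rated on 6 traits, generated from two latent dimensions
# (think valence and dominance) that correlate at r = 0.4.
latent = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=200)
loadings = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.0],   # valence-heavy
                     [0.1, 0.9], [0.2, 0.8], [0.0, 0.7]])  # dominance-heavy
ratings = latent @ loadings.T + rng.normal(0.0, 0.3, (200, 6))

# PCA via SVD of the centered ratings: component scores are orthogonal
# no matter how strongly the generating dimensions correlate.
X = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[:2].T
r_latent = np.corrcoef(latent.T)[0, 1]
r_scores = np.corrcoef(scores.T)[0, 1]
print(f"latent r = {r_latent:.2f}, PCA score r = {r_scores:.2f}")
```

Oblique rotation methods (e.g., oblimin) relax this orthogonality constraint, which is what allows the regional differences described above to surface.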
The flexibility afforded by mobile technology has dissolved the traditional work-life boundary for most professionals. Whether working from home is the key to, or an impediment to, academics' efficiency and work-life balance has become a pressing question for both scientists and their employers. The recent pandemic brought the merits and challenges of working from home into focus at the level of personal experience. Using convenience sampling, we surveyed 704 academics working from home and found that the pandemic lockdown decreased work efficiency for almost half of the researchers, although around a quarter of them were more efficient during this time than before. Based on this personal experience, 70% of the researchers think that in the future they would be similarly or more efficient than before if they could spend more of their work time at home. They indicated that in the office they are better at sharing thoughts with colleagues, keeping in touch with their team, and collecting data, whereas at home they are better at working on their manuscripts, reading the literature, and analyzing their data. Taking well-being into account as well, 66% of them would find it ideal to work more from home in the future than they did before the lockdown. These results draw attention to how working from home is becoming a major element of researchers' lives, and to the need to learn more about its influencing factors and coping tactics in order to optimize its arrangements.
We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.