Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of thirteen classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, ten effects replicated consistently. One effect – imagined contact reducing prejudice – showed weak support for replicability, and two effects – flag priming influencing conservatism and currency priming influencing system justification – did not replicate. We examined whether conditions such as lab versus online administration or U.S. versus international samples predicted effect magnitudes. By and large, they did not. The results of this small sample of effects suggest that replicability depends more on the effect itself than on the sample and setting used to investigate it.

Investigating variation in replicability: A "Many Labs" Replication Project

Replication is a central tenet of science; its purpose is to confirm the accuracy of empirical findings, clarify the conditions under which an effect can be observed, and estimate the true effect size (Brandt et al., 2013; Open Science Collaboration, 2012). Successful replication of an experiment requires recreating the essential conditions of the initial experiment. This is often easier said than done: an enormous number of variables may influence experimental results, yet only a few are ever tested. In the behavioral sciences, many effects have been observed in one cultural context but not in others. Likewise, individuals within the same society, or even the same individual at different times (Bodenhausen, 1990), may differ in ways that moderate any particular result. Direct replication is infrequent, resulting in a published literature that sustains spurious findings (Ioannidis, 2005) and in a failure to identify the conditions that elicit an effect. While there are good epistemological reasons for assuming that observed phenomena generalize across individuals and contexts in the absence of contrary evidence, the failure to directly replicate findings is problematic for both theoretical and practical reasons. Failure to identify moderators and boundary conditions of an effect may result in overly broad generalization of true effects across situations (Cesario, 2013) or across individuals (Henrich, Heine, & Norenzayan, 2010). Similarly, overgeneralization may lead observations made in laboratory settings to be inappropriately extended to ecological contexts that differ in important ways (Henry, MacLeod, Phillips, & Crawford, 2004). Practically, attempts to closely replicate research findings can reveal important differences in what is considered a direct replication (Schmidt, 2009), thus leading to refinements of the initial theory (e.g., Aronson, 1992; Greenwald et al., 1986). Close replication can also lead to the clarification of tacit methodological knowledge that is necessary to elicit the effect of interest (Collins, ...
Forecasting advice from human advisors is often utilized more than advice from automation. There is little understanding of why this "algorithm aversion" occurs, or of the specific conditions that may exaggerate it. This paper first reviews literature from two fields—interpersonal advice and human–automation trust—that can inform our understanding of the underlying causes of the phenomenon. Then, an experiment is conducted to search for these underlying causes. We do not replicate the finding that human advice is generally utilized more than automated advice. However, after receiving bad advice, utilization of automated advice decreased significantly more than utilization of advice from humans. We also find that decision makers describe themselves as having much more in common with human advisors than with automated advisors, despite there being no interpersonal relationship in our study. Results are discussed in relation to other findings from the forecasting and human–automation trust fields and provide a new perspective on what causes and exaggerates algorithm aversion.
Interpreting a failure to replicate is complicated by the fact that the failure could be due to the original finding being a false positive, unrecognized moderating influences between the original and replication procedures, or faulty implementation of the procedures in the replication. One strategy to maximize replication quality is involving the original authors in study design. We (N = 17 labs and N = 1,550 participants, after exclusions) experimentally tested whether original author involvement improved replicability of a classic finding from Terror Management Theory (Greenberg et al., 1994). Our results were non-diagnostic of whether original author involvement improves replicability because we were unable to replicate the finding under any conditions. This suggests that the original finding was either a false positive or that the conditions necessary to obtain it are not fully understood or no longer exist. Data, materials, analysis code, preregistration, and supplementary documents can be found on the OSF page: https://osf.io/8ccnw/
Three experiments examined three factors that may impede the discovery of hidden profiles: commitment to initial decision, reiteration effect, and ownership bias. Experiment 1 examined whether groups in which members are not asked to make an initial decision before group discussion are more likely to uncover hidden profiles than groups in which members are asked to make an initial decision. Experiment 2 examined this commitment to an initial decision and also the repetition of information for individuals. Experiment 3 explored the reiteration effect in groups and examined whether information that is usually repeated more in groups is viewed as more truthful. Experiments 1 and 2 found no support for the commitment to initial decision hypothesis for uncovering hidden profiles. Experiment 2 found that repetition of "common" information significantly reduced individuals' ability to uncover hidden profiles. Experiment 3 found that information individuals owned (both common and unique) before discussion was rated as more valid than other information. Experiment 3 did not find that common information, which is generally repeated more, was rated as more valid than unique information. Limitations of the current studies and suggestions for future research are discussed.
This article reviews research that examines the use of language in small interacting groups and teams. We propose a model of group inputs (e.g., status), processes and emergent states (e.g., cohesion, influence, and innovation), and outputs (e.g., group effectiveness and member well-being) to help structure our review. The model describes how groups use language both to reflect group inputs and to interact with those inputs, shaping group processes, creating emergent states, and ultimately adding value to the group through outputs (e.g., performance). Using cross-disciplinary research, our review finds that language is integral to how groups coordinate, interrelate, and adapt. For example, language convergence is related to increased group cohesion and group performance. Our model provides the theoretical scaffolding to consider language use in interacting small groups and suggests areas for future research.
This paper expands research on the judge advisor system (JAS) by examining advice utilization and trust. Experiment 1 examined five factors that could increase utilization of expert advice: the judge's trust in the advisor, advisor confidence, advisor accuracy, the judge's prior relationship with the advisor, and the judge's power to set payment to the advisor. Although both the judge's trust and advisor confidence correlated with the judge matching the advisor's advice, a stepwise regression found that, of the five variables, advisor confidence was the only significant predictor of the judge matching the advisor's advice. Experiment 2 examined trust without the role assignment to judge or advisor. Whereas trust expressed in the partner was not higher for judges than for advisors in Experiment 1, in Experiment 2 the low-expertise dyad member expressed higher trust in the partner than the high-expertise dyad member did. Results from the two experiments are discussed in the context of Sniezek and Van Swol (2001).