2018
DOI: 10.31234/osf.io/9654g
Preprint

Many Labs 2: Investigating Variation in Replicability Across Sample and Setting

Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across sample and setting. Each protocol was administered to approximately half of 125 samples and 15,305 total participants from 36 countries and territories. Using conventional statistical significance (p < .05), 15 (54%) of the replications provided statistically significant evidence in the same direction as the original finding. …
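The headline figure is simple arithmetic: a finding counts as replicated here only if its effect is both statistically significant at p < .05 and in the same direction as the original, and the rate is the share of the 28 findings meeting both criteria. A minimal Python sketch of that tally (hypothetical inputs, not data from the paper):

    # Minimal sketch: count effects that are significant at p < .05
    # AND point the same way as the original. Inputs are hypothetical.
    p_values = [0.001, 0.20, 0.03, 0.04]        # one per replicated finding
    same_direction = [True, True, True, False]  # effect sign matches original?

    replicated = sum(p < 0.05 and same
                     for p, same in zip(p_values, same_direction))
    print(f"{replicated}/{len(p_values)} replicated "
          f"({replicated / len(p_values):.0%})")
    # Many Labs 2's reported figure is 15 of 28, i.e. about 54%.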

Cited by 81 publications (165 citation statements)
References 51 publications
“…First, the HRWP allows researchers to test the effect of an experimental condition for each participant in a study. Thus, the results of an HRWP study could be seen as akin to recent replication projects involving many studies (Klein et al.; Klein et al.; Open Science Collaboration). But instead of each replication being a study, in the HRWP study, each replication is an individual.…”
Section: Discussion
Mentioning; confidence: 89%
“…Their results paint a rather grim picture of the reliability of psychological research: while 97% of the original studies found significant results, only 36% of the replications were able to reproduce these significant findings. In the Many Labs 2 project, 15 of the 28 attempted replications provided evidence in the same direction as the original finding and statistically significant at the 5% level (Klein et al 2018). See also Maniadis, Tufano, and List (2017) for a systematic review on the problem of information revelation in science.…”
Section: Replicate Early and Often
Mentioning; confidence: 80%
“…A reproducibility rate of 36% was reported by the Open Science Framework for 100 findings from psychological studies (Aarts et al 2015), and a reproducibility rate of 54% for 28 classic findings in psychological science was reported by a more recent Many Labs project (Klein et al 2018b). Such poor reproducibility has been partly attributed to reporting bias and potentially problematic practices such as selective reporting of outcomes (Aarts et al 2015; Baker 2016; Bakker et al 2012; Ioannidis et al 2014; Ioannidis 2005; Ioannidis 2008; John et al 2012; Simmons et al 2011).…”
Section: Discussion
Mentioning; confidence: 98%
“…Varying demographic and/or clinical composition of datasets is another factor likely to influence the reproducibility of findings in human neuroscience, even for such fundamental processes as age-related change in neural structure (LeWinn et al 2017). However, one study that investigated variation in replicability suggested that the contribution of sample heterogeneity can also be modest (Klein et al 2018b). As noted above and in the Methods, the datasets of the current study differed widely in their age ranges and distributions, and the recruitment criteria were variable too, especially whether subjects were selected as healthy controls without certain psychiatric diseases, or recruited just in the context of unselected population studies.…”
Section: Discussion
Mentioning; confidence: 99%