Expectations for Replications
Stanley & Spence (2014)
DOI: 10.1177/1745691614528518

Abstract: Failures to replicate published psychological research findings have contributed to a "crisis of confidence." Several reasons for these failures have been proposed, the most notable being questionable research practices and data fraud. We examine replication from a different perspective and illustrate that current intuitive expectations for replication are unreasonable. We used computer simulations to create thousands of ideal replications, with the same participants, wherein the only difference across replica…
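
A minimal Monte Carlo sketch of the kind of "ideal replication" simulation the abstract describes. Because the abstract is cut off above, the exact design is assumed: the sketch repeatedly draws a two-group study from one fixed population, so that sampling error is the only difference across replications, and reports how widely the observed effect sizes and p-values vary. The parameter values (true_d, n_per_group, n_replications) are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): ideal replications in which the only
# difference across replications is sampling error. All values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d = 0.4           # assumed true standardized mean difference in the population
n_per_group = 40       # assumed sample size per group
n_replications = 5000  # number of ideal replications to simulate

def one_study(rng):
    """Draw one two-group study and return its observed Cohen's d and p-value."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    p = stats.ttest_ind(treatment, control).pvalue
    return d, p

results = np.array([one_study(rng) for _ in range(n_replications)])
ds, ps = results[:, 0], results[:, 1]

print(f"observed d: mean = {ds.mean():.2f}, "
      f"2.5th-97.5th percentile = [{np.percentile(ds, 2.5):.2f}, {np.percentile(ds, 97.5):.2f}]")
print(f"proportion of replications with p < .05: {(ps < 0.05).mean():.2f}")
```

Even though every simulated study samples the same true effect, the observed effect sizes spread widely and a sizeable fraction of the "replications" do not reach p < .05, which is the kind of result the abstract points to when it calls intuitive expectations for replication unreasonable.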

Cited by 203 publications (190 citation statements)
References 41 publications (52 reference statements)

Citation statements (ordered by relevance):
“…Currently, there is considerable discussion in the literature about the (lack of) replicability of a number of results (e.g., Pashler & Wagenmakers, 2012; Peng, 2011; Stanley & Spence, 2014). Replicating the priming effects observed in Experiment 4 would help to alleviate those types of concerns in the present situation.…”
Section: "Different" Trials (mentioning; confidence: 99%)
“…In the worst case, our initial findings may turn out as false positives if a comprehensive analysis of all studies fails to obtain statistically significant results. Of course, a certain proportion of replication failures have to be expected, because failed replications can also be due to unknown moderators and naturally occurring variations in effect sizes as a result of measurement and sampling error (Cummings, 2012; LeBel & Paunonen, 2011; Stanley & Spence, 2014). Nevertheless, the mere existence of several failed replications requires a thorough reassessment of the available evidence to reevaluate the validity of our theory.…”
(mentioning; confidence: 99%)
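
The expectation that some replication failures arise from sampling error alone can be put in numbers with a standard power calculation. The figures below are assumptions chosen for illustration (a true effect of d = 0.5 and an exact replication with 40 participants per group), not values from the quoted study, and the calculation uses statsmodels rather than anything from the cited papers.

```python
# Illustrative power calculation (assumed numbers, not from the cited papers):
# even an exact replication of a real effect fails to reach p < .05 fairly often.
from statsmodels.stats.power import TTestIndPower

true_d = 0.5       # assumed true standardized effect size
n_per_group = 40   # assumed per-group sample size of the exact replication

power = TTestIndPower().power(effect_size=true_d, nobs1=n_per_group,
                              alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}; expected share of nonsignificant replications = {1 - power:.2f}")
```

Under these assumed numbers the exact replication has only about 60% power, so roughly four in ten replications would come out nonsignificant even though the effect is real and the study was run identically.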
“…"Thirty-six percent of replications had statistically significant results" (Open Science Collaboration, 2015, p. 4716-1). Subsequent analysis of the replication intention and efforts have suggested logical concerns to do with the expectations regarding replications themselves (Schmidt, 2010; Stanley & Spence, 2014). The logical framework described here may assist in answering logical questions for each project, such as ones regarding the nature of the psychological construct and its relation to psychological phenomena; or systematic aspects of measurement error which are otherwise not able to be tracked in either original studies, or attempted replications.…”
Section: Reproducibility Questions (mentioning; confidence: 99%)
“…The logical framework described here may assist in answering logical questions for each project, such as ones regarding the nature of the psychological construct and its relation to psychological phenomena; or systematic aspects of measurement error which are otherwise not able to be tracked in either original studies, or attempted replications. This assessment may be used to feedback information regarding measurement error and/or construct to phenomenon relations, as raised in Schmidt (2010) and Stanley and Spence (2014).…”
Section: Reproducibility Questions (mentioning; confidence: 99%)