Abstract: Many Labs 3 (Ebersole et al., 2016) failed to replicate a classic finding from the Elaboration Likelihood Model of persuasion (Cacioppo, Petty, & Morris, 1983; Study 1). Petty and Cacioppo (2016) noted possible limitations of the Many Labs 3 replication (Ebersole et al., 2016) based on the cumulative literature. Luttrell, Petty, and Xu (2017) subjected some of those possible limitations to empirical test. They observed that a revised protocol obtained evidence consistent with the original finding that the Many…
“…Luttrell et al (2017) successfully replicated the Cacioppo et al (1983) NC × AQ interaction when using the optimal procedure and also replicated the failure to find an effect when using the Ebersole et al (2016) protocol. Then in another replication literature first, these findings were further supported by an independent multi-lab replication of Luttrell et al (2017) conducted by Ebersole et al (2017). They too failed to find a significant NC × AQ interaction effect with the Ebersole et al (2016) protocol (replicating their and Luttrell et al's replication failure with those materials), but they did obtain a significant interaction effect with the Luttrell et al (2017) protocol.…”
Section: Evidence Relating Construct Validity and External Validity
“…Is this particular replication study a rare outlier, or a comparatively typical representative of the larger replication literature, or does it fall somewhere in between? It is impossible to answer this question because there are so few studies in which critics of replication efforts attempt to empirically validate their speculations, and to date there is just one example in which the replicators attempted to validate the insights of the critics of their replication effort (Ebersole et al, 2017). However, knowing whether critics of replication efforts are correct in their speculations about why a replication effort failed or not could provide valuable insights into the broader implications of disappointing replication rates.…”
Section: Evidence Relating Construct Validity and External Validity
In recent years, psychology has wrestled with the broader implications of disappointing rates of replication of previously demonstrated effects. This article proposes that many aspects of this pattern of results can be understood within the classic framework of four proposed forms of validity: statistical conclusion validity, internal validity, construct validity, and external validity. The article explains the conceptual logic for how differences in each type of validity across an original study and a subsequent replication attempt can lead to replication “failure.” Existing themes in the replication literature related to each type of validity are also highlighted. Furthermore, empirical evidence is considered for the role of each type of validity in non-replication. The article concludes with a discussion of broader implications of this classic validity framework for improving replication rates in psychological research.
“…It is possible, for example, that differences in the methodologies that were thought to be irrelevant are actually important (Hines et al, 2014). Indeed, a failed replication can lead to a better understanding of a phenomenon if it results in the generation of new hypotheses to explain how the original and replication methodologies produced different results and, critically, leads to follow-up experiments to test these hypotheses (Ebersole et al, 2017). …”
Section: What Does It Mean To Repeat the Methodology?
The first results from the Reproducibility Project: Cancer Biology suggest that there is scope for improving reproducibility in pre-clinical cancer research. DOI: http://dx.doi.org/10.7554/eLife.23383.001
“…For example, it has been well recognised that replication attempts should ideally inform theory in an iterative way, with theory informing replication design, replication results informing theory, and on again (e.g. Earp & Trafimow, 2015; Ebersole et al, 2017; Klein et al, 2014a). However, these points are (unsurprisingly) rarely seen as undermining the aim and practice of replication in general.…”
Section: Theory Development and The Idea Of Progress
At least since Meehl’s (in)famous 1978 article, the state of theorizing in psychology has often been lamented. Replication studies have been presented as a way of directly supporting theory development by enabling researchers to more confidently and precisely test and update theoretical claims. In this article I use contemporary work from philosophy of science to make explicit and emphasize just how much theory development is required before “good” replication studies can be carried out, and to show just how little theoretical payoff even good conceptual replications offer. I suggest that in many areas of psychology, aiming at replication is misplaced and that replication attempts are instead better seen as exploratory studies that can be used in the cumulative development of theory and measurement procedures.