2021
DOI: 10.1177/10892680211015635
Context Dependency as a Predictor of Replicability

Abstract: We scrutinize the argument that unsuccessful replications—and heterogeneous effect sizes more generally—may reflect an underappreciated influence of context characteristics. Notably, while some of these context characteristics may be conceptually irrelevant (as they merely affect psychometric properties of the measured/manipulated variables), others are conceptually relevant as they qualify a theory. Here, we present a conceptual and analytical framework that allows researchers to empirically estimate the exte…

Cited by 17 publications (5 citation statements)
References: 70 publications
“…3, para. 1) disregards the (in our view, important) distinction between conceptually relevant and conceptually irrelevant design space dimensions (see Gollwitzer & Schwabe, 2022).…”
Section: Introduction (mentioning)
confidence: 99%
“…Against this background, it also has to be mentioned here that a significant number of participants were excluded because the experimental manipulation had not worked for them, i.e., their perceived availability of COVID-19 vaccines was not affected by the experimental condition. On the one hand, using manipulation checks as an exclusion criterion guarantees that the participants under research have been influenced by the given information in the intended direction, which is a necessary precondition for drawing any causal conclusions between the experimental variation and the variation on the dependent variables [32, 33]. On the other hand, this practice is increasingly criticized in some social science disciplines [42] because of the risk of increasing biases between the sub-samples.…”
Section: Discussion (mentioning)
confidence: 99%
“…Following the recommendations of Leiner (2019), participants with a high relative speed index (RSI > 2), a measure provided by the survey platform that uses processing time to flag suspicious data patterns associated with poor quality, and participants with reading times equal to or less than 2 SD below the sample mean were excluded from data analysis [30,31]. There is a growing body of criticism of conducting experimental research without controlling for a successful experimental manipulation within the sample under research [32,33]. Given that people's prior knowledge and information could have strongly influenced whether they believed the treatment texts or not, I preregistered the manipulation check variable as an exclusion criterion, as done in other experimental research [34][35][36].…”
Section: Exclusion Criteria (mentioning)
confidence: 99%
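The exclusion procedure described in the quotation above (relative speed index above 2, reading times at or below 2 SD under the sample mean, and a failed manipulation check) amounts to a preregistered filter over the participant data. The following is a minimal Python/pandas sketch of that logic under assumed, hypothetical column names ('rsi', 'reading_time', 'manip_check_passed'); it is an illustration only, not the cited authors' actual analysis code.

```python
import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop participants flagged by the preregistered exclusion criteria.

    Assumes hypothetical columns: 'rsi' (relative speed index),
    'reading_time' (seconds), and 'manip_check_passed' (boolean).
    """
    # Criterion 1: relative speed index above 2 (suspiciously fast responding)
    too_fast = df["rsi"] > 2

    # Criterion 2: reading time equal to or less than 2 SD below the sample mean
    cutoff = df["reading_time"].mean() - 2 * df["reading_time"].std()
    too_short = df["reading_time"] <= cutoff

    # Criterion 3: failed manipulation check (preregistered as an exclusion)
    failed_check = ~df["manip_check_passed"]

    # Keep only participants who meet none of the exclusion criteria
    return df[~(too_fast | too_short | failed_check)].copy()
```

In practice, the retained and excluded counts per criterion would typically be reported alongside the analysis so readers can judge how much the exclusions shape the final sample.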
“…A crisis in science has, however, been emerging over the past two decades with the realization that scientists attempting to replicate many pivotal studies could not (Baker, 2016). This problem exists in the behavioral as well as in the biological sciences (Gollwitzer & Schwabe, 2021; Wiggins & Christopherson, 2019). For instance, a massive project involving 270 researchers using methodologically rigorous designs found that less than half of 90 major studies published in three top-ranking psychology journals were able to be replicated (Open Science Collaboration, 2015).…”
Section: Major Findings (mentioning)
confidence: 99%