2020
DOI: 10.31222/osf.io/2gurz
Preprint
Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints.

Abstract: Robust scientific knowledge is contingent upon replication of original findings. However, researchers who conduct replication studies face a difficult problem: there are many more studies in need of replication than there are funds available for replicating. To select studies for replication efficiently, we need to understand which studies are the most in need of replication. In other words, we need to understand which replication efforts have the highest expected utility. In this article we propose a general …


Cited by 18 publications (42 citation statements)
References 55 publications
“…In fact, a team science approach could be targeted at different stages of the research process. For example, researchers in a specific domain could crowdsource the most relevant original hypotheses to test or existing findings to replicate, although more principled ways to arrive at the latter have also been proposed (e.g., Field, Hoekstra, Bringmann, & van Ravenzwaaij, 2019; Isager et al., 2020). I already discussed the project by Landy and colleagues (2020) who crowdsourced study designs to test a set of hypotheses.…”
Section: Discussion
confidence: 99%
“…Although lower error rates would establish claims more convincingly, this would also require more resources. One might speculate that, in research areas where not every claim is important enough to warrant investing the resources required to establish claims with low error rates (Isager et al., 2020), an alpha level of 5% has a pragmatic function: it facilitates conjectures and refutations in fields that otherwise lack a coordinated approach to knowledge generation but are faced with limited resources.…”
Section: The Convention of an Alpha Level of 0.05
confidence: 99%
“…Pittelkow et al. [38] follow a similar procedure, applied to studies from clinical psychology. Isager et al. [39] outline a model for deciding on the utility of replicating a study, given known costs, by calculating the value of a claim and the uncertainty about that claim prior to replication. The key variables in this model (costs, value, and uncertainty) remain undefined, with the expectation that each can be specified outside the model, as relevant to a given knowledge domain.…”
Section: RepliCATS as a Model for Allocating Replication Effort
confidence: 99%