2018
DOI: 10.31234/osf.io/xyks4
Preprint
How do academics assess the results of multiple experiments?

Abstract: We studied how academics assess the results of a set of four experiments that all test a given theory. We found that participants’ belief in the theory increases with the number of significant results, and that direct replications were considered to be more important than conceptual replications. We found no difference between authors and reviewers in their propensity to submit or recommend to publish sets of results, but we did find that authors are generally more likely to desire an additional experiment. In…

Cited by 9 publications (5 citation statements)
References 0 publications
“…Proportion of research questions with more true results. Scientists sometimes assess evidence for research questions using heuristic tallies of positive and negative results [38]. As such, the proportion of questions with more true than false results is a useful metric for evaluating the proportion of questions for which scientists will acquire accurate beliefs.…”
Section: Results
confidence: 99%
“…This, in turn, reduces the chances of obtaining false-positive results (Wicherts et al., 2016), thus increasing replicability rates (Munafò, 2016; Munafò et al., 2017). Given that replications significantly contribute to trust in psychological theories (van den Akker et al., 2018) and that low replicability has been shown to impair public trust in science (Anvari and Lakens, 2018; Chopik et al., 2018; Hendriks et al., 2020; Wingen et al., 2020), increased trust is thus a likely result of OSPs. It should, however, be noted that some of these assumptions cannot be tested empirically, and that at least one study found that low replicability does not have much of a detrimental effect on public perceptions of science (Mede et al., 2021).…”
Section: Introduction
confidence: 99%
“…Response rates and invitation procedure. Response rates were rather low in many studies that relied on researchers as the sample (e.g., [9, 26, 38, 39, 53, 54, 60]). Based on these studies and on the insights from our pilot study (see section Pilot Study), we anticipated a response rate of around 10%.…”
Section: PLOS ONE
confidence: 99%