2020
DOI: 10.1002/jrsm.1448

Estimating the prevalence of missing experiments in a neuroimaging meta‐analysis

Abstract: Coordinate-based meta-analyses (CBMA) allow researchers to combine the results from multiple functional magnetic resonance imaging experiments with the goal of obtaining results that are more likely to generalize. However, the interpretation of CBMA findings can be impaired by the file drawer problem, a type of publication bias that refers to experiments that are carried out but are not published. Using foci per contrast count data from the BrainMap database, we propose a zero-truncated modeling approach that …
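The zero-truncated idea in the abstract can be illustrated with a minimal sketch. This is not the authors' actual model (which is fit to foci-per-contrast counts from BrainMap); it is an assumed, simplified version: fit a zero-truncated Poisson to observed nonzero counts, then use the implied probability of a zero count to estimate how many unreported experiments exist per 100 reported ones. All function names and simulation parameters below are illustrative.

```python
import math
import random

def fit_ztp(counts, iters=200):
    """Fit a zero-truncated Poisson by fixed-point iteration on the
    moment equation m = lam / (1 - exp(-lam)), where m is the sample mean."""
    m = sum(counts) / len(counts)
    lam = m  # initial guess; iteration contracts for m > 1
    for _ in range(iters):
        lam = m * (1.0 - math.exp(-lam))
    return lam

def missing_per_100(lam):
    """Expected number of unobserved (zero-count) experiments per 100 observed,
    i.e. 100 * P(K=0) / P(K>0) under the fitted Poisson rate."""
    p0 = math.exp(-lam)
    return 100.0 * p0 / (1.0 - p0)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for small rates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Simulate a "file drawer": draw counts, but only the nonzero ones are "published".
random.seed(0)
true_lam = 3.0
sample = [poisson(true_lam) for _ in range(5000)]
observed = [k for k in sample if k > 0]

lam_hat = fit_ztp(observed)
# missing_per_100(lam_hat) then estimates the unpublished-experiment rate
# recovered from the truncated sample alone.
```

The key point the sketch demonstrates is that the zero class never appears in the data, yet its frequency is identified by the shape of the nonzero counts under the parametric assumption.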

Cited by 34 publications (17 citation statements)
References 37 publications
“…It was applied for the estimation of the robustness against unpublished neuroimaging findings. A recent study using the data from BrainMap provides evidence for the existence of a file drawer effect, with the rate of missing contrasts estimated as at least 6 per 100 reported 50 …”
Section: Methods
“…A recent study using the data from BrainMap provides evidence for the existence of a file drawer effect, with the rate of missing contrasts estimated as at least 6 per 100 reported. 50 Therefore, the convergence meta‐analysis was retested starting with an additional 6% noise to evaluate the robustness of the identified clusters. The surviving clusters were then retested, with a noise rate of up to 30%, as in the previous study.…”
Section: Methods
“…The number of foci and subjects of those experiments were chosen to match the distribution they had in the original data. Samartsidis and colleagues (Samartsidis et al, 2020) estimated that the file-drawer effect of the BrainMap database amounts to 6%. Thus, we decided to perform a series of Fail-Safe analyses adding 6% and 60% of random experiments to each dataset, so as to evaluate the robustness of our results.…”
Section: [Experiments Context Is Normal Mapping] and [Experiments Activ…]
“…Secondly, papers with non-significant results are less likely to be published. This publishing bias is also known as the ‘file-drawer problem’ ( Rosenthal, 1979 ; Ioannidis et al, 2014 ; de Winter and Dodou, 2015 ; for evidence in fMRI studies, see Jennings and Van Horn, 2012 ; Acar et al, 2018 ; David et al, 2018 ; Samartsidis et al, 2020 ). Prejudice against the null hypothesis systematically biases our knowledge of true effects ( Greenwald, 1975 ).…”
Section: Introduction