2018
DOI: 10.1371/journal.pone.0208177

Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI

Abstract: The importance of integrating research findings is incontrovertible, and procedures for coordinate-based meta-analysis (CBMA) such as Activation Likelihood Estimation (ALE) have become a popular approach to combining the results of fMRI studies when only peaks of activation are reported. As meta-analytical findings help build cumulative knowledge and guide future research, not only the quality of such analyses but also the way conclusions are drawn is extremely important. Like classical meta-analyses, coordinate-b…

Cited by 118 publications (139 citation statements). References 43 publications.
“…We might hypothesize that the dataset of the experiments included in the COA-O analysis could be just a subsample of all the possible evidence about GM changes of opposite polarity, thus biasing our results. To assess their robustness, we implemented a modified version of the Fail-safe technique (Acar et al., 2018). The rationale behind this is that we cannot treat the COA-O network as if we had gathered all the possible experiments on the matter, but we can still observe what would happen if our sample were much larger.…”
Section: Fail-safe and Dummy-pairs Analyses
mentioning, confidence: 99%
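The fail-safe idea quoted above, in which plausible "noise" experiments with randomly located foci are added until the original convergence no longer survives, can be sketched in a few lines of Python. This is a minimal illustration rather than the procedure of Acar et al. (2018): the `run_ale` callable, the dataset layout (a list of per-experiment foci arrays), and the cluster-survival criterion are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noise_experiments(k, foci_per_exp, mask_coords, rng):
    """Create k 'noise' experiments whose foci are drawn uniformly at random
    from voxel coordinates inside a brain mask (mask_coords: (n_voxels, 3))."""
    experiments = []
    for _ in range(k):
        idx = rng.choice(len(mask_coords), size=foci_per_exp, replace=False)
        experiments.append(mask_coords[idx])
    return experiments

def clusters_survive(baseline_map, augmented_map, overlap=0.5):
    """Crude survival criterion: at least `overlap` of the voxels that were
    significant at baseline stay significant after noise studies are added."""
    base = baseline_map > 0
    if base.sum() == 0:
        return False
    return (augmented_map[base] > 0).mean() >= overlap

def fail_safe_n(real_experiments, mask_coords, run_ale, max_noise=100, step=10):
    """Re-run the meta-analysis with increasing numbers of noise experiments and
    return the largest count at which the original clusters still survive.
    `run_ale` stands in for whatever CBMA routine is actually used; it should
    return a thresholded map with zeros at non-significant voxels."""
    baseline = run_ale(real_experiments)
    for k in range(step, max_noise + step, step):
        noise = make_noise_experiments(k, foci_per_exp=8,
                                       mask_coords=mask_coords, rng=rng)
        if not clusters_survive(baseline, run_ale(real_experiments + noise)):
            return k - step
    return max_noise
```

In the published Fail-safe procedure the noise studies are generated to resemble the real experiments (comparable sample sizes and numbers of foci) rather than being purely uniform; the uniform sampling above is only a simplification of that step.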
“…One possible reason for these inconsistent findings is sample size, which was less than 30 in all previous individual experiments with only a few exceptions (Badzakova‐Trajkov et al, ; Kessler et al, ; Rymarczyk et al, ), because studies involving small sample sizes may have low statistical power and a reduced chance of detecting effects (Button et al, ). This issue could be relevant even to meta‐analyses (Arsalidou et al, ; Zinchenko et al, ) because these studies employed coordinate‐based meta‐analytical methods, which assess the convergence of the locations of activation foci reported in individual studies and have difficulty detecting small effects with underpowered individual studies (Acar, Seurinck, Eickhoff, & Moerkerke, ). Another possible reason for these inconsistencies is that a majority of previous studies have compared dynamic with static facial expressions.…”
Section: Introduction
mentioning, confidence: 99%
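The point about underpowered individual studies can be made concrete: coordinate-based methods such as ALE model each reported focus as a 3-D Gaussian whose width reflects the spatial uncertainty of the study, so small samples contribute diffuse probability and convergence across studies becomes harder to detect. The sketch below illustrates that mechanism on a toy grid; the FWHM-versus-sample-size rule used here is an invented approximation for illustration, not the empirical calibration used in real ALE implementations.

```python
import numpy as np

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian FWHM (in mm) to its standard deviation."""
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def modeled_activation_map(foci, grid, n_subjects):
    """Modeled activation (MA) map for one experiment: every focus is blurred
    with a Gaussian whose width shrinks as the sample size grows, reflecting
    greater spatial certainty. The FWHM-per-N rule below is illustrative only."""
    fwhm = 6.0 + 12.0 / np.sqrt(n_subjects)
    sigma = fwhm_to_sigma(fwhm)
    ma = np.zeros(len(grid))
    for focus in foci:
        d2 = np.sum((grid - focus) ** 2, axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

def ale_map(experiments, grid):
    """ALE value per voxel: the probability that at least one experiment
    'activates' it, i.e. 1 - prod_i(1 - MA_i)."""
    prod_term = np.ones(len(grid))
    for foci, n_subjects in experiments:
        prod_term *= 1.0 - modeled_activation_map(foci, grid, n_subjects)
    return 1.0 - prod_term

# Toy example: two experiments reporting nearby foci with different sample sizes.
grid = np.mgrid[0:40:4, 0:40:4, 0:40:4].reshape(3, -1).T.astype(float)
experiments = [(np.array([[20.0, 20.0, 20.0]]), 12),   # small study, broad kernel
               (np.array([[22.0, 20.0, 18.0]]), 40)]   # larger study, sharper kernel
print(ale_map(experiments, grid).max())
```

In the actual algorithm the kernels are probability distributions calibrated from empirical between-subject and between-template variance, and the resulting ALE map is tested against a null distribution of random spatial association; the sketch only shows why studies with few subjects spread their evidence over a larger volume and thus contribute less to detectable convergence.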
“…One of the strengths of this review is the use of the robust ALE methodology to meta-analyze imaging results in neuromodulation treatment. Another strength of this paper is the homogeneity of the neuroimaging methods used in the included papers.…”
Section: Discussion
mentioning, confidence: 99%