Publication Bias in Meta‐Analysis (2005)
DOI: 10.1002/0470870168.ch7
Failsafe N or File‐Drawer Number

Cited by 156 publications (121 citation statements); references 0 publications.
“…More recently, Aguinis et al. (2011) also debunked the myth that the failsafe N analysis is an effective indicator of publication bias. Similar caveats apply to modifications (e.g., Orwin, 1983) of Rosenthal's original failsafe N technique (Becker, 2005; Higgins & Green, 2009). Unfortunately, despite this evidence, failsafe N techniques appear to be the predominantly used method to detect the potential presence of publication bias in the organizational sciences (see Table 1).…”
Section: Methods for Detecting and Assessing Publication Bias
confidence: 88%
“…Failsafe N. Originally introduced by Rosenthal (1979), the failsafe N technique attempts to estimate the number of missing effect sizes that would be needed to make a meta-analytic mean effect size estimate statistically nonsignificant. The technique has several critical limitations, which were discussed more than a decade ago (Becker, 1994, 2005; Evans, 1996). For instance, the failsafe N assumes that all missing effect sizes are zero, which is improbable.…”
Section: Methods for Detecting and Assessing Publication Bias
confidence: 99%
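The arithmetic behind Rosenthal's failsafe N, as described above, solves (ΣZ)² / (k + N) = z_α² for N, the number of additional zero-effect studies needed to pull the combined one-tailed test below significance. A minimal sketch with a hypothetical helper name and illustrative Z values (it demonstrates the calculation only, not an endorsement of the technique):

```python
def failsafe_n(z_scores, z_alpha=1.645):
    """Rosenthal's (1979) file-drawer number.

    z_scores: hypothetical per-study Z statistics (illustrative values).
    z_alpha: one-tailed critical value (1.645 for p = .05).
    """
    k = len(z_scores)
    sum_z = sum(z_scores)
    # Solve (sum_z)^2 / (k + N) = z_alpha^2 for N. Note the critiqued
    # assumption: every unpublished study has an effect of exactly zero.
    return max(0.0, sum_z ** 2 / z_alpha ** 2 - k)

# Three studies, each with Z = 2.0: roughly 10 unpublished null
# studies would be needed to render the combined test nonsignificant.
print(round(failsafe_n([2.0, 2.0, 2.0]), 1))
```

The small return value in this toy case illustrates why interpretation guidelines (e.g., Rosenthal's 5k + 10 tolerance level) were proposed alongside the statistic.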
“…To assess the robustness of our results, we conducted a sensitivity analysis to determine how sensitive the combined estimate was to any one study, by repeatedly calculating the overall ES with one study omitted per iteration and comparing the results with the overall study effect. We analyzed the threat of possible publication bias to the validity of the obtained outcomes using the funnel plot, 30 failsafe N, 31 and trim and fill. 32 The failsafe N method determines the number of additional "negative" studies (e.g., studies showing no difference between ASD and non-ASD groups) needed to reduce the overall test to nonsignificance.…”
Section: Results
confidence: 99%
“…63,85 The failsafe N or file-drawer number estimates the number of studies that would need to be included in the meta-analysis to change the overall results. 55,82 The Duval and Tweedie trim-and-fill method assumes that the most undesirable studies are missing. 86 An asymmetric appearance with many missing studies suggests publication or small-sample bias.…”
Section: Publication Bias
confidence: 99%
“…Publication bias was assessed using three techniques: the funnel plot, 81 failsafe N, 82 and the Duval and Tweedie trim-and-fill method. 83,84 In the absence of publication bias, the distribution of effect sizes in the funnel plot is symmetrical and takes on an inverted funnel shape.…”
Section: Publication Bias
confidence: 99%
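Trim-and-fill formalizes the funnel-plot symmetry check mentioned above: it estimates how many studies are "missing" on one side of the pooled effect. A hypothetical, deliberately simplified sketch of Duval and Tweedie's L0 estimator using an unweighted mean (the function name is mine; the full method iterates trimming and re-estimation, uses inverse-variance weights, and imputes mirrored studies):

```python
def trim_and_fill_L0(effects):
    """One pass of the L0 estimator for the number of suppressed studies.

    effects: hypothetical study effect sizes (illustrative values only).
    Returns k0, the estimated count of missing studies on one side.
    """
    n = len(effects)
    center = sum(effects) / n  # unweighted mean stands in for the pooled estimate
    deviations = [e - center for e in effects]
    # Rank the absolute deviations, then sum the ranks of the positive
    # deviations (a signed-rank statistic, T_n).
    ranked = sorted(range(n), key=lambda i: abs(deviations[i]))
    t_n = sum(rank + 1 for rank, i in enumerate(ranked) if deviations[i] > 0)
    # Duval & Tweedie's L0 estimator: (4*T_n - n(n+1)) / (2n - 1)
    return max(0, round((4 * t_n - n * (n + 1)) / (2 * n - 1)))

print(trim_and_fill_L0([1, 2, 3, 4, 5]))   # symmetric funnel: no missing studies
print(trim_and_fill_L0([0, 5, 6, 7, 8]))   # right-skewed: one study estimated missing
```

A symmetric set of effects yields k0 = 0, while a one-sided cluster of large effects pushes k0 above zero, mirroring the funnel-asymmetry intuition in the quoted passages.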