2010
DOI: 10.1016/j.jspi.2009.09.017
How does the DerSimonian and Laird procedure for random effects meta-analysis compare with its more efficient but harder to compute counterparts?

Cited by 166 publications (171 citation statements)
References 24 publications
“…[42] REML is more sensitive in meta-analyses including smaller studies.[43] Mean effect sizes obtained were reversed and a positive effect of CBT was represented by a positive effect size, and vice versa. The threshold for statistical significance was an alpha value of 0.05.…”
Section: Meta-analysis
confidence: 98%
“…[5] The accessibility of the DerSimonian-Laird (DL) method and its inclusion in common meta-analysis software such as RevMan[6] has led to it being the most common method for using random effects in meta-analyses, and it is a fairly reliable approximation when the number of studies is large.[7] Note that failure to account for the heterogeneity between studies will result in an underestimation of the variability of the overall effect θ.[5] This means that leaving out the random effect leads to us assuming that we have a more precise measure of θ than we really do, resulting in an inflated rate of false positives. In the meta-analytic application in Elgandy et al, a false positive would be where one declares that the type of MPI test (appropriate vs inappropriate) indicates a statistically significant difference in the probability of an outcome (e.g., abnormal test or ischemia) when in fact none exists.…”
Section: See Related Article, pp. 680-689
confidence: 99%
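The underestimation described in the excerpt above can be illustrated numerically: adding a between-study variance τ² to each within-study variance always widens the pooled standard error, so ignoring τ² overstates precision. A minimal sketch with hypothetical variances (none of these numbers come from the cited paper):

```python
# Illustrative comparison: ignoring between-study heterogeneity shrinks the
# standard error of the pooled effect, which inflates false-positive rates.
# The variances and tau2 below are hypothetical values chosen for illustration.

v = [0.04, 0.09, 0.05]   # within-study variances of three studies
tau2 = 0.06              # assumed between-study (heterogeneity) variance

# Fixed-effect SE: inverse-variance weights using within-study variance only
se_fixed = (1.0 / sum(1.0 / vi for vi in v)) ** 0.5

# Random-effects SE: tau2 is added to every within-study variance
se_random = (1.0 / sum(1.0 / (vi + tau2) for vi in v)) ** 0.5

# The fixed-effect SE is always the smaller of the two whenever tau2 > 0
assert se_fixed < se_random
```

Since each random-effects weight 1/(vᵢ + τ²) is smaller than the corresponding fixed-effect weight 1/vᵢ, the random-effects variance of the pooled estimate is strictly larger whenever τ² > 0.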
“…It has been noted that the DL method can severely underestimate τ² when the underlying proportion is near zero or one,[8] or if the number of studies is small (<20).[7,8] The method has also been found to produce an inflated false positive rate for the overall conclusion when there is a large variation in the sample sizes of the included studies; a tenfold difference in study size was found to produce results with very poor statistical properties using the DL method.[9] The majority of these issues are due to the original DL method being based on a simple approximation, which does behave well when the number of studies is large and the studies themselves are fairly uniform.…”
Section: See Related Article, pp. 680-689
confidence: 99%
“…The inherent differences between studies introduce the concept of heterogeneity among the effect sizes. To account for this heterogeneity in computing the summary correlation, we used the inverse variance method for pooling the correlations, and we used the DerSimonian-Laird method to estimate the heterogeneity variance among studies (Jackson, Bowden, & Baker, 2009). Under the DerSimonian-Laird method, let Yi be the treatment effect of the ith study where…”
Section: Data Conversion
confidence: 99%
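The computation the excerpt above begins to set up (Yi the effect of the ith study, pooled by inverse variance with a DL estimate of the heterogeneity variance) follows the standard method-of-moments recipe. A minimal sketch, assuming y holds the per-study effects and v their within-study variances (the function and variable names are illustrative, not from the cited paper):

```python
# Sketch of the DerSimonian-Laird (DL) random-effects estimator.
# y: per-study effect estimates Y_i; v: their within-study variances.

def dersimonian_laird(y, v):
    k = len(y)
    w = [1.0 / vi for vi in v]                # fixed-effect (inverse-variance) weights
    sw = sum(w)
    theta_fe = sum(wi * yi for wi, yi in zip(w, y)) / sw  # fixed-effect pooled estimate
    # Cochran's Q statistic measures observed between-study dispersion
    Q = sum(wi * (yi - theta_fe) ** 2 for wi, yi in zip(w, y))
    # Method-of-moments estimate of the heterogeneity variance tau^2,
    # truncated at zero; this truncation is one reason the estimate can be
    # poor when the number of studies k is small
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each within-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]
    theta_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = (1.0 / sum(w_re)) ** 0.5
    return theta_re, se_re, tau2
```

For homogeneous studies Q falls below k − 1 and τ² is truncated to zero, in which case the DL estimate coincides with the fixed-effect one; the excerpts above note that this simple approximation only behaves well when k is large and the studies are fairly uniform.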