1998
DOI: 10.1037/1082-989x.3.3.339
Using odds ratios as effect sizes for meta-analysis of dichotomous data: A primer on methods and issues.

Abstract: Many meta-analysts incorrectly use correlations or standardized mean difference statistics to compute effect sizes on dichotomous data. Odds ratios and their logarithms should almost always be preferred for such data. This article reviews the issues and shows how to use odds ratios in meta-analytic data, both alone and in combination with other effect size estimators. Examples illustrate procedures for estimating the weighted average of such effect sizes and methods for computing variance estimates, confidence…
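The odds ratio and log odds ratio discussed in the abstract can be computed directly from a 2×2 table of counts. A minimal sketch (function name and example counts are illustrative; the variance is the standard Woolf large-sample formula 1/a + 1/b + 1/c + 1/d):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its large-sample (Woolf) variance for a
    2x2 table: a, b = events / non-events in the treatment group,
    c, d = events / non-events in the control group.
    """
    log_or = math.log((a * d) / (b * c))
    variance = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d
    return log_or, variance

# Hypothetical counts: 15/100 events in one group vs 5/100 in the other
lo, v = log_odds_ratio(15, 85, 5, 95)
ci = (lo - 1.96 * math.sqrt(v), lo + 1.96 * math.sqrt(v))  # 95% CI on the log scale
```

Pooling and confidence intervals are done on the log scale, then exponentiated back to an odds ratio for reporting.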

Cited by 287 publications (259 citation statements); references 53 publications.
“…"Fixed effects" models assume that each individual study is sampling the same underlying population effect, and that all of the between-study differences are due to measurement noise, sampling error, subtle differences in instructions and so on. "Random effects" models do not assume that all of the underlying studies sample an identical population effect (Borenstein et al., 2010; Cumming, 2012; Haddock, Rindskopf & Shadish, 1998); hence there are sources of variation (demand characteristics seem likely in some of the reaching across the midline sequential tasks, for example, or in our study 2) which will not be identical from study to study. One limitation of random effects methods, however, is that studies with smaller sample sizes can contribute more to the overall effect estimate, as they contribute more to estimates of between-study variability (in fixed effects models smaller variances result in larger weights).…”
Section: Study (mentioning)
confidence: 97%
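The fixed- versus random-effects weighting described in the excerpt above can be sketched as inverse-variance pooling, with the DerSimonian-Laird between-study variance added to each weight under the random-effects model (function name and data are illustrative, not from the cited studies):

```python
def pooled_log_or(effects, variances):
    """Pool per-study log odds ratios by inverse-variance weighting.

    Returns (fixed_est, random_est, tau2). The random-effects estimate
    uses the DerSimonian-Laird between-study variance tau^2, which is
    added to every study's sampling variance and therefore flattens the
    weights, giving small studies relatively more influence.
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed_est = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q and the DerSimonian-Laird estimate of tau^2
    q = sum(wi * (y - fixed_est) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    random_est = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return fixed_est, random_est, tau2
```

With log odds ratios [0.2, 1.0] and variances [0.01, 0.25], the small (high-variance) study pulls the random-effects estimate well above the fixed-effects one, illustrating the limitation the excerpt notes.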
“…For studies that included control variables (e.g., baseline physical health, alcohol or drug use), the odds ratios are likewise adjusted: they represent the relative odds of survival for religious and nonreligious individuals, controlling for the designated attributes. Odds ratios near 1.0 indicate weak or nonexistent associations between variables, whereas odds ratios greater than 3.0 (or less than 0.33, in the case of negative associations) represent strong associations between variables (Haddock et al., 1998).…”
Section: Computation Of Effect Size Estimates (mentioning)
confidence: 99%
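The rule of thumb quoted above (odds ratios near 1.0 weak or nonexistent, beyond 3.0 or below 0.33 strong) can be expressed as a small helper. The boundaries of the "near 1.0" band used here are an assumption, since the excerpt only anchors the extremes:

```python
def describe_or(odds_ratio):
    """Verbal label for an odds ratio using the thresholds quoted above:
    near 1.0 = weak or nonexistent; >= 3.0 or <= 1/3 = strong. The band
    treated as 'near 1.0' (0.8-1.25) is an assumption, not from the source.
    """
    if odds_ratio >= 3.0 or odds_ratio <= 1.0 / 3.0:
        return "strong"
    if 0.8 <= odds_ratio <= 1.25:
        return "weak or nonexistent"
    return "moderate"
```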
“…Of the three indices, the odds ratio is the best one for most situations because of its good statistical properties (Fleiss, 1994; Haddock, Rindskopf, & Shadish, 1998), although risk difference and risk ratio can also be good alternatives under certain conditions (Deeks & Altman, 2001; Hasselblad, Mosteller, et al., 1995; Sánchez-Meca & Marín-Martínez, 2000). In particular, these three indices have been applied in meta-analyses in the health sciences, because in this field it is very common to find research issues in which the outcome is always measured as a dichotomous (or dichotomized) variable.…”
Section: University Of Seville (mentioning)
confidence: 99%
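The three indices compared in this excerpt (odds ratio, risk ratio, risk difference) all derive from the same 2×2 table; a minimal illustration (function and argument names are placeholders):

```python
def binary_effect_sizes(a, b, c, d):
    """Odds ratio, risk ratio, and risk difference from one 2x2 table.

    a, b = events / non-events in group 1; c, d = events / non-events
    in group 2.
    """
    p1 = a / (a + b)  # risk in group 1
    p2 = c / (c + d)  # risk in group 2
    return {
        "odds_ratio": (p1 / (1.0 - p1)) / (p2 / (1.0 - p2)),
        "risk_ratio": p1 / p2,
        "risk_difference": p1 - p2,
    }
```

For risks of 0.15 vs 0.05, the risk ratio is 3.0 while the odds ratio is about 3.35; the two diverge increasingly as the event becomes common, which is one reason the choice among the indices matters.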
“…In this case, phi coefficients also underestimate the population correlation coefficient and, therefore, d indices would also underestimate the meta-analytic results (Fleiss, 1994; Haddock et al., 1998). Conversely, some meta-analysts transform ev… [Footnote 1: Whitehead, Bailey, and Elbourne (1999) proposed another strategy consisting of estimating the log odds ratio in each study with continuous measures assuming normal (or log-normal) distributions.]
mentioning
confidence: 99%
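The excerpt's concern about mixing odds-ratio-based and d-based results is commonly handled by converting the log odds ratio to a standardized mean difference via the logistic-distribution approximation d = ln(OR)·√3/π (often attributed to Hasselblad & Hedges, 1995); a sketch:

```python
import math

def log_or_to_d(log_or):
    """Convert a log odds ratio to a standardized mean difference d via
    the logistic-distribution approximation d = ln(OR) * sqrt(3) / pi;
    the sampling variance converts by the same factor squared (3 / pi^2)."""
    return log_or * math.sqrt(3.0) / math.pi
```

The approximation assumes the dichotomous outcome arises from thresholding an underlying logistic variable, which is why it is only one of several proposed conversions.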