In describing the impact of an intervention, a single effect size, odds ratio, or other summary measure is often employed. This single measure is useful in calibrating the effect of one intervention against others, but it is less meaningful when the intervention displays variation in impact. A single intervention trial can show differential effects when subgroups respond differentially, when impact varies by environmental context, or when impact varies across outcome measures or follow-up times. This article presents a multilevel mixture modeling approach for meta-analyses that summarizes these sources of impact variation across trials and measured outcomes.

Keywords: meta-analysis; effect sizes; multilevel modeling; mixture modeling

Effect sizes (ESs) and related scale-invariant measures, such as odds ratios, risk differences, and hazard ratios, are often used to calibrate the strength of an intervention's impact in a single experimental trial. By combining ESs from identical or similar interventions tested across different trials conducted as separate studies, it is possible to obtain a single measure of impact representing the overall effect across different trial conditions (Hedges, 1982; Hedges & Olkin, 1985; Lipsey & Wilson, 2001). Different interventions tested in different trials can then be ranked by the magnitude of their ESs. This approach allows policy makers to select among competing strategies even when the interventions have never been compared head to head in a trial. One example of this approach is the comparison of prevention versus service programs to reduce delinquency; such comparison reveals that, in terms of reduction in criminality, the benefits of prevention exceed those of incarceration (Greenwood, Model, Rydell, & Chiesa, 1996).
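The combination of per-trial ESs into a single summary measure is conventionally done with inverse-variance weighting (the fixed-effect estimator described in Hedges & Olkin, 1985). The sketch below illustrates that computation; the trial effect sizes and sampling variances are hypothetical numbers chosen for illustration, not data from any study cited here.

```python
# Minimal sketch of fixed-effect (inverse-variance) pooling of effect sizes.
# The input values are hypothetical, not taken from the trials discussed above.

def pooled_effect_size(effects, variances):
    """Return the inverse-variance weighted pooled ES and its variance.

    Each trial is weighted by the reciprocal of its sampling variance,
    so more precise trials contribute more to the summary measure.
    """
    weights = [1.0 / v for v in variances]
    total_weight = sum(weights)
    pooled = sum(w * es for w, es in zip(weights, effects)) / total_weight
    return pooled, 1.0 / total_weight  # pooled ES and its sampling variance

# Three hypothetical trials: standardized mean differences and variances.
es = [0.30, 0.45, 0.10]
var = [0.02, 0.05, 0.01]
mean_es, mean_var = pooled_effect_size(es, var)
print(mean_es, mean_var)  # a single summary ES and its precision
```

A random-effects estimator would additionally include a between-trial variance component in each weight; the fixed-effect version above is the simpler case in which all trials are assumed to estimate one common effect.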
Finally, the ESs from a broad array of related trials can be treated as a dependent variable in a regression model to identify common factors in interventions that predict higher impact. This analytical model led Tobler et al. (2000) to conclude that interactive curricula led to lower drug use than did lecturing.

Traditional meta-analyses of intervention impact use a single mean to assess the impact of a single intervention across replicated trials (Petrosino, Boruch, Rounding, McDonald, & Chalmers, 2000; The Cochrane Collaboration, 2007). Many of the well-known meta-analyses, such as those of 177 mental health prevention programs for youth (Durlak & Wells, 1997), 207 school-based drug prevention programs (Tobler et al., 2000), 221 delinquency prevention programs (Wilson et al., 2003a; Wilson, Lipsey, & Soydan, 2003b), and 69 depression prevention programs (Horowitz & Garber, 2006; Jané-Llopis, Hosman, Jenkins, & Anderson, 2003), all chose a single ES for each trial. However, in all these meta-analyses there were typically 4-10 distinct, relevant outcome analyses for each trial, corresponding to multiple outcome variables, multiple follow-up times, and analyses involving distinct subgroups (Brown, Berndt, Brinales, Zong, & Bhagwat, 2...