This paper offers lessons from a three-year Test-bed project that tested systematic review practices developed by the Campbell Collaboration and the Cochrane Collaboration. Under the Test-bed project, 14 systematic reviews of interventions in crime prevention, social welfare, and education were completed. (References to the products of these test-bed reviews are included in the reference list, preceded by an asterisk.) Building on this experience, the authors recommend structuring future reviews around well-defined review topics explicitly focused on particular interventions, and constraining literature searches to evaluations of those interventions. Reviewers should analyze and report findings from RCTs separately from non-RCT studies, and should report impact estimates in natural units rather than relying solely on effect size metrics. Further, reviewers should report intent-to-treat estimates as the causally valid outcomes from RCTs; analyses of impacts for treated subgroups should be reported as non-experimental findings. More attention should be given to the minimum detectable effect a study can support, as well as to any information on the possible costs and benefits of the intervention. Pooling results from studies of disparate interventions, populations, and contexts is not recommended; meta-analysis should be reserved for homogeneous clusters of intervention studies. Forest plots are helpful for presenting study findings and confidence limits, but simple bar charts preserve important information on the base levels of the outcomes. Finally, reviewers should define a priori the minimum data set, or required elements, that allows study inclusion, and use this information systematically in deciding what evidence to admit into the review.

Evidence-based public policy is a popular refrain. The interest in evidence-based policy and practice rests, in part, on concerns that scarce resources be deployed efficiently.
The call for evidence-based policies set in motion a groundswell of interest in reviewing the existing evidence base, both as a means of providing the best available evidence to guide ongoing decision-making and as an aid in prioritizing the near-term research agenda. This paper offers lessons from a three-year effort to test practices for the review and synthesis of evidence on the effectiveness of social policy interventions, building on review practices recently applied in health care.
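The abstract's distinction between standardized effect size metrics and impact estimates in natural units can be made concrete with a small numerical sketch. The example below uses an entirely hypothetical two-arm trial (all figures are invented for illustration) and contrasts the risk difference in natural units, which a policymaker can read directly, with a standardized effect size for proportions (Cohen's h), which discards the base rate that a bar chart would preserve.

```python
import math

# Hypothetical two-arm RCT in a crime-prevention setting.
# All figures below are invented for illustration only.
n_treat, n_ctrl = 200, 200
events_treat, events_ctrl = 50, 70  # participants with the adverse outcome

p_treat = events_treat / n_treat  # outcome rate, treatment arm
p_ctrl = events_ctrl / n_ctrl     # outcome rate, control arm

# Natural units: the risk difference is directly interpretable
# (here, percentage points of the outcome averted).
risk_difference = p_ctrl - p_treat

# Standardized metric: Cohen's h (arcsine-transformed difference in
# proportions). The same h can arise from very different base rates,
# which is the information a standardized metric alone loses.
cohens_h = 2 * math.asin(math.sqrt(p_ctrl)) - 2 * math.asin(math.sqrt(p_treat))

print(f"control base rate: {p_ctrl:.2f}")
print(f"risk difference:   {risk_difference:.2f}")
print(f"Cohen's h:         {cohens_h:.2f}")
```

A review reporting only `cohens_h` would leave readers unable to recover the control-arm base rate, whereas the risk difference alongside that base rate conveys both the size and the practical meaning of the impact.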