The incorporation of causal inference into mediation analysis has led to theoretical and methodological advances: effect definitions with causal interpretations, clarification of the assumptions required for effect identification, and an expanding array of options for effect estimation. However, the literature on these results is fast-growing and complex, and can be confusing to researchers unfamiliar with causal inference or with mediation. The goal of this article is to ease the understanding and adoption of causal mediation analysis. It starts by highlighting a key difference between the causal inference and traditional approaches to mediation analysis, and makes the case for explicit causal thinking and the causal inference approach in mediation analysis. It then explains, in as plain language as possible, the existing effect types, paying special attention to motivating these effects with different types of research questions and using concrete examples for illustration. This presentation differentiates two perspectives (or purposes of analysis): the explanatory perspective (aiming to explain the total effect) and the interventional perspective (asking questions about hypothetical interventions on the exposure and mediator, or hypothetically modified exposures). For the latter perspective, the article proposes tapping into a general class of interventional effects that contains as special cases most of the usual effect types: interventional direct and indirect effects, controlled direct effects, a generalized interventional direct effect type, as well as the total effect and overall effect. This general class allows flexible effect definitions that better match many research questions than the standard interventional direct and indirect effects.
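The effect types above are defined nonparametrically in the article. As a minimal numerical illustration only (synthetic data, all coefficients hypothetical, not the article's method), the following sketch uses the linear no-interaction special case, where direct and indirect effects reduce to the familiar product-of-coefficients quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic data: binary exposure A, continuous mediator M, outcome Y.
# True structural model (linear, no exposure-mediator interaction):
#   M = 0.5*A + e_M,   Y = 0.3*A + 0.8*M + e_Y
A = rng.binomial(1, 0.5, n).astype(float)
M = 0.5 * A + rng.normal(0, 1, n)
Y = 0.3 * A + 0.8 * M + rng.normal(0, 1, n)

def ols(cols, y):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a_hat = ols([A], M)[1]            # exposure -> mediator path
_, c_hat, b_hat = ols([A, M], Y)  # direct path, mediator -> outcome path

direct_effect = c_hat             # recovers roughly 0.3
indirect_effect = a_hat * b_hat   # recovers roughly 0.5 * 0.8 = 0.4
total_effect = direct_effect + indirect_effect
```

Under exposure-mediator interaction or nonlinearity this shortcut no longer identifies the effects of interest, which is one motivation for the more general effect definitions the article develops.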
Purpose of review
Propensity score methods have become commonplace in pharmacoepidemiology over the past decade. Their adoption has had to overcome formidable obstacles arising from pharmacoepidemiology's reliance on large healthcare databases of considerable heterogeneity and complexity. These obstacles include identifying clinically meaningful samples, defining treatment comparisons, and measuring covariates in ways that respect sound epidemiologic study design. Additional complexities involve correctly modeling treatment decisions in the face of variation in healthcare practice, and dealing with missing information and unmeasured confounding. In this review, we examine the application of propensity score methods in pharmacoepidemiology with particular attention to these and other issues, with an eye toward standards of practice, recent methodological advances, and opportunities for future progress.
Recent findings
Propensity score methods have matured in ways that can advance comparative effectiveness and safety research in pharmacoepidemiology. These include natural extensions for categorical treatments, matching algorithms that can optimize sample size given design constraints, weighting estimators that asymptotically target matched and overlap samples, and the incorporation of machine learning to aid in covariate selection and model building.
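As a rough sketch of the weighting estimators mentioned above (not code from the review; the single-confounder setup and all numbers are invented), the following compares inverse-probability weights, which target the full cohort, with overlap weights, which down-weight units with extreme propensity scores and target the population in clinical equipoise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Synthetic cohort: a single confounder X drives both treatment and outcome.
X = rng.normal(0, 1, n)
T = rng.binomial(1, 1 / (1 + np.exp(-0.8 * X))).astype(float)
Y = 2.0 * T + 1.5 * X + rng.normal(0, 1, n)   # true treatment effect = 2.0

# Fit a logistic propensity model by Newton-Raphson (numpy only).
Z = np.column_stack([np.ones(n), X])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ beta))
    beta += np.linalg.solve((Z * (p * (1 - p))[:, None]).T @ Z, Z.T @ (T - p))
ps = 1 / (1 + np.exp(-Z @ beta))

# Inverse-probability weights target the whole cohort (ATE); overlap
# weights (1 - ps for treated, ps for controls) target the overlap sample.
w_ipw = T / ps + (1 - T) / (1 - ps)
w_ovl = T * (1 - ps) + (1 - T) * ps

def weighted_diff(w):
    """Hajek-style weighted difference in mean outcomes, treated vs control."""
    return (np.sum(w * T * Y) / np.sum(w * T)
            - np.sum(w * (1 - T) * Y) / np.sum(w * (1 - T)))

naive = Y[T == 1].mean() - Y[T == 0].mean()   # confounded contrast
ate_ipw = weighted_diff(w_ipw)
ate_ovl = weighted_diff(w_ovl)
```

Because the simulated effect is constant, both weighting schemes recover the same estimand here; with heterogeneous effects the two target different populations, which is exactly the design choice the review discusses.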
Summary
These recent and encouraging advances should be further evaluated through simulation and empirical studies, but nonetheless represent a bright path ahead for the observational study of treatment benefits and harms.
Background
Randomized controlled trials are often used to inform policy and practice for broad populations. The average treatment effect (ATE) for a target population, however, may differ from the ATE observed in a trial if there are effect modifiers whose distribution in the target population differs from that in the trial. Methods exist to use trial data to estimate the target population ATE, provided the distributions of treatment effect modifiers are observed in both the trial and the target population, an assumption that may not hold in practice.
Methods
The proposed sensitivity analyses address the situation where a treatment effect modifier is observed in the trial but not in the target population. These methods are based on an outcome model, or on the combination of such a model and weighting adjustment for observed differences between the trial sample and target population. They accommodate several types of outcome models: linear models (including single-time outcomes and pre- and post-treatment outcomes) for additive effects, and models with log or logit links for multiplicative effects. We clarify the methods' assumptions and provide detailed implementation instructions.
Illustration
We illustrate the methods using an example generalizing the effects of an HIV treatment regimen from a randomized trial to a relevant target population.
Conclusion
These methods allow researchers and decision-makers to place appropriately calibrated confidence in conclusions about target population effects.
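The article's own sensitivity analyses are not reproduced here. As a hedged sketch of the basic weighting adjustment they build on, the following invented example simulates a trial that over-represents an effect modifier X and reweights trial participants by their inverse odds of trial selection to recover the target population ATE:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trial, n_pop = 5000, 5000

# Effect modifier X: the trial over-represents high-X individuals.
x_trial = rng.normal(1.0, 1.0, n_trial)
x_pop = rng.normal(0.0, 1.0, n_pop)

# Randomized treatment; the effect (1 + X) varies with X, so the in-trial
# ATE (about 2) differs from the target-population ATE (about 1).
t = rng.binomial(1, 0.5, n_trial).astype(float)
y = (1.0 + x_trial) * t + x_trial + rng.normal(0, 1, n_trial)

# Selection model (trial membership vs population), fit by Newton-Raphson.
Z = np.column_stack([np.ones(n_trial + n_pop),
                     np.concatenate([x_trial, x_pop])])
s = np.concatenate([np.ones(n_trial), np.zeros(n_pop)])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ beta))
    beta += np.linalg.solve((Z * (p * (1 - p))[:, None]).T @ Z, Z.T @ (s - p))

# Weight trial participants by the inverse odds of trial selection.
p_sel = 1 / (1 + np.exp(-Z[:n_trial] @ beta))
w = (1 - p_sel) / p_sel

naive_ate = y[t == 1].mean() - y[t == 0].mean()
target_ate = (np.sum(w * t * y) / np.sum(w * t)
              - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
```

The sensitivity analyses in the article address the harder case this sketch assumes away: when X is measured in the trial but not in the target population, so the selection model above cannot be fit on it.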
Randomized trials are considered the gold standard for assessing the causal effects of a drug or intervention in a study population, and their results are often utilized in the formulation of health policy. However, there is growing concern that results from trials do not necessarily generalize well to their respective target populations, in which policies are enacted, due to substantial demographic differences between study and target populations. In trials related to substance use disorders (SUDs), especially, strict exclusion criteria make it challenging to obtain study samples that are fully "representative" of the populations that policymakers may wish to generalize their results to. In this paper, we provide an overview of post-trial statistical methods for assessing and improving upon the generalizability of a randomized trial to a well-defined target population. We then illustrate the different methods using a randomized trial related to methamphetamine dependence and a target population of substance abuse treatment seekers, and provide software to implement the methods in R using the "generalize" package. We discuss several practical considerations for researchers who wish to utilize these tools, such as the importance of acquiring population-level data to represent the target population of interest, and the challenges of data harmonization.
Evidence‐based policy at the local level requires predicting the impact of an intervention to inform whether it should be adopted. Increasingly, local policymakers have access to published research evaluating the effectiveness of policy interventions from national research clearinghouses that review and disseminate evidence from program evaluations. Through these evaluations, local policymakers have a wealth of evidence describing what works, but not necessarily where. Multisite evaluations may produce unbiased estimates of the average impact of an intervention in the study sample and still produce inaccurate predictions of the impact for localities outside the sample for two reasons: (1) the impact of the intervention may vary across localities, and (2) the evaluation estimate is subject to sampling error. Unfortunately, there is relatively little evidence on how much the impacts of policy interventions vary from one locality to another and almost no evidence on the implications of this variation for the accuracy with which the local impact of adopting an intervention can be predicted using findings from an evaluation in other localities. In this paper, we present a set of methods for quantifying the accuracy of the local predictions that can be obtained using the results of multisite randomized trials and for assessing the likelihood that prediction errors will lead to errors in local policy decisions. We demonstrate these methods using three evaluations of educational interventions, providing the first empirical evidence of the ability to use multisite evaluations to predict impacts in individual localities—i.e., the ability of “evidence‐based policy” to improve local policy.
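As one hedged illustration of quantifying cross-site impact variation (not the paper's full method; the site-level estimates below are made up), a DerSimonian-Laird random-effects model yields a prediction interval for the impact in a new, unstudied site:

```python
import numpy as np

# Hypothetical site-level impact estimates (effect sizes) and standard
# errors from a multisite trial; numbers invented for illustration.
est = np.array([0.12, 0.25, -0.05, 0.18, 0.30, 0.02, 0.15, 0.08])
se = np.array([0.06, 0.08, 0.07, 0.05, 0.09, 0.06, 0.07, 0.08])
k = len(est)

# DerSimonian-Laird estimate of the cross-site impact variance tau^2.
w = 1 / se**2
mu_fixed = np.sum(w * est) / np.sum(w)
q = np.sum(w * (est - mu_fixed) ** 2)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects mean and a 95% prediction interval for the impact in a
# NEW site -- wider than a confidence interval for the average impact,
# because it also carries the cross-site variation tau^2.
w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * est) / np.sum(w_re)
pred_sd = np.sqrt(tau2 + 1 / np.sum(w_re))
lo, hi = mu_re - 1.96 * pred_sd, mu_re + 1.96 * pred_sd
```

The width of (lo, hi) relative to the confidence interval for the average impact is one way to express the paper's point: an unbiased multisite average can still predict an individual locality's impact poorly when tau^2 is large.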