Non-causal associations between exposures and outcomes threaten the validity of causal inference in observational studies, and many design and analysis techniques have been developed to identify and eliminate such errors. These problems are not expected to compromise experimental studies, where careful standardization of conditions (for laboratory work) and randomization (for population studies) should, if applied properly, eliminate most non-causal associations. We argue, however, that a routine precaution in the design of biological laboratory experiments—the use of “negative controls”—serves to detect both suspected and unsuspected sources of spurious causal inference. In epidemiology, analogous negative controls help to identify and resolve confounding as well as other sources of error, including recall bias and analytic flaws. We distinguish two types of negative controls (exposure controls and outcome controls), describe examples of each type from the epidemiologic literature, and identify the conditions under which such negative controls can detect confounding. We conclude that negative controls should be more commonly employed in observational studies, and that additional work is needed to specify the conditions under which negative controls will be sensitive detectors of other sources of error in observational studies.
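The logic of a negative-control outcome can be sketched in a small simulation (all variable names and effect sizes below are hypothetical, not drawn from any study discussed): the exposure is tested against an outcome it cannot plausibly cause, so any clearly nonzero association flags residual confounding.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical setup: an unmeasured confounder (say, health-seeking
# behavior) drives the exposure, the real outcome, and a negative-control
# outcome that the exposure cannot plausibly cause.
u = rng.normal(size=n)
exposure = u + rng.normal(size=n)
outcome = 0.5 * exposure + u + rng.normal(size=n)   # true effect = 0.5
nc_outcome = u + rng.normal(size=n)                 # true effect = 0

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Both crude associations are inflated by u; the nonzero
# exposure/negative-control association (when it should be ~0)
# is the signal that confounding is present.
print(f"exposure -> outcome:          {slope(exposure, outcome):.2f}")
print(f"exposure -> negative control: {slope(exposure, nc_outcome):.2f}")
```

Here the negative-control association comes out far from zero, correctly indicating that the crude exposure-outcome estimate is confounded.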
We give critical attention to the assumptions underlying Mendelian randomization analysis and their biological plausibility. Several scenarios violating the Mendelian randomization assumptions are described, including inadequate phenotype definition, time-varying exposures, gene-environment interaction, measurement error, reverse causation, and linkage disequilibrium. Data analysis examples illustrate that inappropriate use of instrumental variable techniques when the Mendelian randomization assumptions are violated can lead to biases of enormous magnitude. To help address some of the strong assumptions being made, three possible approaches are suggested. First, the original proposal of Katan (Lancet. 1986;1:507-508) for Mendelian randomization was not to use instrumental variable techniques to obtain estimates, but merely to examine genotype-outcome associations to test for the presence of an effect of the exposure on the outcome. We show that this more modest goal and approach can circumvent many, though not all, of the potential biases described. Second, we discuss the use of sensitivity analysis to evaluate the consequences of violations of the assumptions and to attempt to correct for those violations. Third, we suggest that a focus on negative, rather than positive, Mendelian randomization results may turn out to be more reliable.
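Both Katan's genotype-outcome test and the instrumental-variable (Wald ratio) estimate can be illustrated in a simulation where the Mendelian randomization assumptions hold by construction; all effect sizes and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical MR setting: genotype g affects exposure x only,
# and an unmeasured confounder u affects both x and outcome y.
g = rng.binomial(2, 0.3, n)                  # allele count 0/1/2 (instrument)
u = rng.normal(size=n)                       # unmeasured confounder
x = 0.4 * g + u + rng.normal(size=n)         # exposure
y = 0.5 * x + u + rng.normal(size=n)         # outcome; true causal effect = 0.5

def ols_slope(z, w):
    """Simple-regression slope of w on z."""
    return np.cov(z, w, ddof=1)[0, 1] / np.var(z, ddof=1)

beta_gx = ols_slope(g, x)   # genotype-exposure association
beta_gy = ols_slope(g, y)   # genotype-outcome association (Katan's test:
                            # a nonzero value is evidence of an effect of x on y)
wald = beta_gy / beta_gx    # instrumental-variable (Wald ratio) estimate

print(f"naive OLS of y on x: {ols_slope(x, y):.3f}")  # confounded, far from 0.5
print(f"Wald ratio (IV):     {wald:.3f}")             # close to 0.5
```

When the assumptions are violated (e.g., the genotype also affects y directly), the Wald ratio would be biased while the simpler genotype-outcome test may retain a valid, if more modest, interpretation.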
Influenza viruses undergo frequent antigenic changes. As a result, the circulating viruses change within and between seasons, and the composition of the influenza vaccine is updated annually. Thus, the vaccine's effectiveness is not constant across seasons. To provide annual estimates of influenza vaccine effectiveness, health departments have increasingly adopted the "test-negative design," using enhanced data from routine surveillance systems. In this design, patients presenting to participating general practitioners with influenza-like illness are swabbed for laboratory testing; those testing positive for influenza virus are defined as cases, and those testing negative form the comparison group. Data on patients' vaccination histories and confounder profiles are also collected. Vaccine effectiveness is estimated from the odds ratio comparing the odds of testing positive for influenza between vaccinated and unvaccinated patients, adjusting for confounders. The test-negative design is purported to reduce bias from confounding by health-care-seeking behavior and from misclassification of cases. In this paper, we use directed acyclic graphs to characterize potential biases in studies of influenza vaccine effectiveness that use the test-negative design. We show how studies using this design can avoid or minimize bias and where particular design variations may introduce bias.
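In its unadjusted form, the test-negative calculation reduces to an odds ratio from a two-by-two table; the counts below are purely illustrative, and in practice the odds ratio would be adjusted for confounders (e.g., via logistic regression).

```python
# Hypothetical counts from a test-negative study:
# rows = vaccination status, columns = influenza test result.
vac_pos, vac_neg = 30, 170      # vaccinated: test-positive, test-negative
unvac_pos, unvac_neg = 90, 210  # unvaccinated: test-positive, test-negative

# Odds of testing positive among vaccinated vs. unvaccinated patients.
odds_ratio = (vac_pos / vac_neg) / (unvac_pos / unvac_neg)

# Unadjusted vaccine effectiveness; confounder adjustment is omitted
# in this sketch.
ve = 1.0 - odds_ratio
print(f"OR = {odds_ratio:.3f}, VE = {ve:.1%}")
```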
Whilst estimation of the marginal (total) causal effect of a point exposure on an outcome is arguably the most common objective of experimental and observational studies in the health and social sciences, in recent years investigators have also become increasingly interested in mediation analysis. Specifically, upon evaluating the total effect of the exposure, investigators routinely wish to make inferences about the pathways of that effect: the indirect pathway through a mediator variable that occurs after the exposure and prior to the outcome, and the direct pathway not through that mediator. Although powerful semiparametric methodologies have been developed for observational studies that produce doubly robust and highly efficient estimates of the marginal total causal effect, similar methods for mediation analysis are currently lacking. Thus, this paper develops a general semiparametric framework for obtaining inferences about so-called marginal natural direct and indirect causal effects, while appropriately accounting for a large number of pre-exposure confounders of the exposure and mediator variables. Our analytic framework is particularly appealing because it gives new insights on issues of efficiency and robustness in the context of mediation analysis. In particular, we propose new multiply robust, locally efficient estimators of the marginal natural indirect and direct causal effects, and develop a novel doubly robust sensitivity analysis framework for the assumption of ignorability of the mediator variable.
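The estimands themselves (not the multiply robust estimators described above) can be illustrated in a simple linear simulation with no exposure-mediator interaction and no unmeasured confounding, where the natural effects reduce to familiar regression quantities; all coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical linear data-generating process:
# exposure a -> mediator m -> outcome y, plus a direct a -> y path.
a = rng.binomial(1, 0.5, n)                  # exposure
m = 0.8 * a + rng.normal(size=n)             # mediator
y = 0.3 * a + 0.6 * m + rng.normal(size=n)   # outcome

# Under these assumptions the natural effects reduce to the classic
# difference/product of regression coefficients.
alpha = m[a == 1].mean() - m[a == 0].mean()  # effect of a on m (~0.8)
X = np.column_stack([np.ones(n), a, m])
gamma, beta = np.linalg.lstsq(X, y, rcond=None)[0][1:]

nde = gamma          # natural direct effect   (~0.3)
nie = alpha * beta   # natural indirect effect (~0.8 * 0.6 = 0.48)
print(f"NDE = {nde:.3f}, NIE = {nie:.3f}, total = {nde + nie:.3f}")
```

The semiparametric estimators in the paper target these same estimands without relying on the linearity and no-interaction assumptions built into this sketch.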
As with other instrumental variable (IV) analyses, Mendelian randomization (MR) studies rest on strong assumptions. These assumptions are not routinely evaluated in a systematic way in MR applications, although such evaluation could add to the credibility of MR analyses. In this article, the authors present several methods that are useful for evaluating the validity of an MR study. They apply these methods to a recent MR study that used fat mass and obesity-associated (FTO) genotype as an IV to estimate the effect of obesity on mental disorder. These approaches to evaluating the assumptions for a valid IV analysis are not fail-safe: there are situations in which they might either fail to identify a biased IV or inappropriately suggest that a valid IV is biased. The authors therefore describe the assumptions upon which the IV assessments themselves rely. The methods described are relevant to any IV analysis, whether based on a genetic IV or on other possible sources of exogenous variation. Methods that assess the IV assumptions are generally not conclusive, but routinely applying them is nonetheless likely to improve the scientific contributions of MR studies.
We consider a causal effect that is confounded by an unobserved variable, but with observed proxy variables of the confounder. We show that, with at least two independent proxy variables satisfying a certain rank condition, the causal effect is nonparametrically identified, even if the measurement error mechanism, i.e., the conditional distribution of the proxies given the confounder, may not be identified. Our result generalizes the identification strategy of Kuroki & Pearl (2014) that rests on identification of the measurement error mechanism. When only one proxy for the confounder is available, or the required rank condition is not met, we develop a strategy to test the null hypothesis of no causal effect.
In failure‐time settings, a competing event is any event that makes it impossible for the event of interest to occur. For example, cardiovascular disease death is a competing event for prostate cancer death because an individual cannot die of prostate cancer once he has died of cardiovascular disease. Various statistical estimands have been defined as possible targets of inference in the classical competing risks literature. Many reviews have described these statistical estimands and their estimating procedures with recommendations about their use. However, this previous work has not used a formal framework for characterizing causal effects and their identifying conditions, which makes it difficult to interpret effect estimates and assess recommendations regarding analytic choices. Here we use a counterfactual framework to explicitly define each of these classical estimands. We clarify that, depending on whether competing events are defined as censoring events, contrasts of risks can define a total effect of the treatment on the event of interest or a direct effect of the treatment on the event of interest not mediated by the competing event. In contrast, regardless of whether competing events are defined as censoring events, counterfactual hazard contrasts cannot generally be interpreted as causal effects. We illustrate how identifying assumptions for all of these counterfactual estimands can be represented in causal diagrams, in which competing events are depicted as time‐varying covariates. We present an application of these ideas to data from a randomized trial designed to estimate the effect of estrogen therapy on prostate cancer mortality.
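The consequence of defining competing events as censoring events can be seen in a tiny worked example (hypothetical data, not the trial discussed above): the naive Kaplan-Meier complement that censors competing events targets a different, here larger, quantity than the Aalen-Johansen cumulative incidence.

```python
from collections import Counter

# Hypothetical data: 10 subjects with (time, event type), where
# 1 = event of interest, 2 = competing event, 0 = censored.
data = [(1, 1), (2, 2), (3, 1), (4, 2), (5, 1),
        (6, 0), (6, 0), (6, 0), (6, 0), (6, 0)]

def competing_risks(data):
    """Return (Aalen-Johansen cumulative incidence of event 1,
    naive 1 - KM that treats competing events as censoring),
    both evaluated at the end of follow-up."""
    at_risk = len(data)
    surv_all, surv_naive, cif = 1.0, 1.0, 0.0
    for t in sorted({t for t, _ in data}):
        counts = Counter(e for tt, e in data if tt == t)
        d1, d2, c = counts[1], counts[2], counts[0]
        cif += surv_all * d1 / at_risk        # Aalen-Johansen increment
        surv_all *= 1 - (d1 + d2) / at_risk   # survival from any event
        surv_naive *= 1 - d1 / at_risk        # competing events "censored"
        at_risk -= d1 + d2 + c
    return cif, 1 - surv_naive

cif, naive = competing_risks(data)
print(f"cumulative incidence: {cif:.5f}")   # 0.30000
print(f"naive 1 - KM:         {naive:.5f}") # 0.34375
```

The naive estimand corresponds to a hypothetical world where the competing event is eliminated, which is why its interpretation requires the counterfactual framing developed in the paper.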
Robins et al. (2008) published a theory of higher-order influence functions for inference in semi- and non-parametric models. This paper is the comprehensive manuscript from which Robins et al. (2008) was drawn; it includes many results and proofs that were omitted there due to space limitations. Particular results