We develop a compound decision theory framework for multiple-testing problems and derive an oracle rule based on the z values that minimizes the false nondiscovery rate (FNR) subject to a constraint on the false discovery rate (FDR). We show that many commonly used multiple-testing procedures, which are p value-based, are inefficient, and propose an adaptive procedure based on the z values. The z value-based adaptive procedure asymptotically attains the performance of the z value oracle procedure and is more efficient than the conventional p value-based methods. We investigate the numerical performance of the adaptive procedure using both simulated and real data. In particular, we demonstrate our method in an analysis of the microarray data from a human immunodeficiency virus study that involves testing a large number of hypotheses simultaneously.
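The adaptive z-value procedure described above is built around a local false discovery rate statistic. The following is a minimal sketch of an Lfdr-type step-up rule of that general kind, assuming the null proportion `pi0` and the non-null density values `f1` have already been estimated elsewhere; the function name and arguments are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def lfdr_stepup(z, pi0, f1, alpha=0.10):
    """Sketch of a z-value (local-FDR-based) step-up multiple-testing rule.

    z     : array of z-values, assumed N(0, 1) under the null
    pi0   : estimated proportion of true null hypotheses (assumed supplied)
    f1    : estimated non-null density evaluated at each z (assumed supplied)
    alpha : target FDR level
    Returns a boolean array indicating which hypotheses are rejected.
    """
    f0 = norm.pdf(z)                             # null density
    f = pi0 * f0 + (1 - pi0) * np.asarray(f1)    # two-group mixture density
    lfdr = pi0 * f0 / f                          # local false discovery rate
    order = np.argsort(lfdr)                     # most significant first
    running_mean = np.cumsum(lfdr[order]) / np.arange(1, len(z) + 1)
    k = np.searchsorted(running_mean, alpha, side="right")  # largest k with mean <= alpha
    reject = np.zeros(len(z), dtype=bool)
    reject[order[:k]] = True
    return reject
```

Ranking hypotheses by an Lfdr-type statistic rather than by p-values lets the rule use the shape of the alternative distribution, which is, loosely, where the claimed efficiency gain over p-value-based methods comes from.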
The paper considers the problem of multiple testing under dependence in a compound decision theoretic framework. The observed data are assumed to be generated from an underlying two-state hidden Markov model. We propose oracle and asymptotically optimal data-driven procedures that aim to minimize the false non-discovery rate (FNR) subject to a constraint on the false discovery rate (FDR). It is shown that the performance of a multiple-testing procedure can be substantially improved by adaptively exploiting the dependence structure among hypotheses, and hence that conventional FDR procedures that ignore this structural information are inefficient. Both the theoretical properties and the numerical performance of the proposed procedures are investigated. It is shown that the proposed procedures control the FDR at the desired level, enjoy certain optimality properties and are especially powerful in identifying clustered non-null cases. The new procedure is applied to an influenza-like illness surveillance study to detect the timing of epidemic periods. Copyright (c) 2009 Royal Statistical Society.
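As a companion to the sketch above, a dependence-aware version replaces the marginal Lfdr statistic with the posterior probability of being null given all the observations, computed under the hidden Markov model. The following is a minimal illustration, assuming the two-state HMM parameters (transition matrix `A`, initial distribution `pi_init`, and alternative emission parameters `mu1`, `sd1`) have been estimated elsewhere; all names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def hmm_stepup(x, A, pi_init, mu1, sd1, alpha=0.10):
    """Sketch of an HMM-based multiple-testing rule of the kind described.

    Assumes a two-state hidden chain (0 = null, 1 = non-null), N(0, 1) emissions
    under the null and N(mu1, sd1) under the alternative; A is the 2x2 transition
    matrix and pi_init the initial state distribution, both taken as estimated.
    Computes the posterior probability of the null state at each position via a
    scaled forward-backward pass, then applies the same step-up rule as before.
    """
    m = len(x)
    e = np.column_stack([norm.pdf(x), norm.pdf(x, mu1, sd1)])  # emission densities
    fwd = np.zeros((m, 2))
    bwd = np.ones((m, 2))
    fwd[0] = pi_init * e[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, m):                        # scaled forward pass
        fwd[t] = e[t] * (fwd[t - 1] @ A)
        fwd[t] /= fwd[t].sum()
    for t in range(m - 2, -1, -1):               # scaled backward pass
        bwd[t] = A @ (e[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    stat = post[:, 0] / post.sum(axis=1)         # P(null | all data), a LIS-type statistic
    order = np.argsort(stat)
    running_mean = np.cumsum(stat[order]) / np.arange(1, m + 1)
    k = np.searchsorted(running_mean, alpha, side="right")
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

Because neighbouring hypotheses share information through the transition matrix, a moderate observation surrounded by strong signals receives a small posterior null probability, which illustrates why such procedures are particularly powerful for clustered non-null cases.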
The use of propensity scores to adjust for measured confounding factors has become increasingly popular in cohort studies. However, their use in case-control and case-cohort studies has received little attention. The authors present some theory on the estimation and use of propensity scores in case-control and case-cohort studies and report the results of simulation studies that examine whether large-sample expectations are realized in studies of typical size. The application of propensity scores is less straightforward in case-control and case-cohort studies than in cohort studies. The authors' simulations revealed two potentially important issues. First, several potential approaches produce artifactual effect modification of the odds ratio by level of the propensity score; the magnitude of this phenomenon decreases as the sample size increases. Second, several potential approaches produce estimated propensity scores that do not converge to the true value as the sample size increases and thus can fail to adjust fully for measured confounding factors. However, the magnitude of the residual confounding appeared modest in the authors' simulations. Researchers considering the use of propensity scores in case-control or case-cohort studies should take into account the potential for artifactual effect modification and the reduced ability to control for measured confounding factors.
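To make the adjustment concrete, here is a minimal sketch of one candidate approach in a case-control setting: fitting the propensity model among controls only and then entering the estimated score as quantile strata in the outcome model. The column names (`case`, `exposure`, `x1`, `x2`) are hypothetical, and this is only one of several possible approaches, not necessarily the authors' recommended method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def ps_adjusted_or(df, n_strata=5):
    """Sketch of one possible propensity-score adjustment in a case-control study.

    Assumes a DataFrame with columns 'case' (0/1 outcome), 'exposure' (0/1), and
    measured confounders 'x1', 'x2' (hypothetical names). The propensity model is
    fitted among controls only; the estimated score is then entered as quantile
    strata in the logistic outcome model for case status.
    """
    controls = df[df["case"] == 0]
    ps_model = smf.logit("exposure ~ x1 + x2", data=controls).fit(disp=0)
    df = df.assign(ps=ps_model.predict(df))                   # score for all subjects
    df["ps_stratum"] = pd.qcut(df["ps"], n_strata, labels=False)
    outcome_model = smf.logit("case ~ exposure + C(ps_stratum)", data=df).fit(disp=0)
    return np.exp(outcome_model.params["exposure"])           # adjusted odds ratio
```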