When permutation methods are used in practice, a limited number of random permutations is often used to reduce the computational burden. However, most of the theoretical literature assumes that the whole permutation group is used, and methods based on random permutations tend to be regarded as approximate. The literature on exact testing with random permutations is very limited, and a thorough proof of exactness was given only recently. In this paper, we provide an alternative proof, viewing the test as a "conditional Monte Carlo test", as it has been called in the literature. We also provide extensions of the result. Importantly, our results can be used to prove properties of various multiple testing procedures based on random permutations.
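The exactness result rests on counting the observed (identity) permutation among the random draws, so that the p-value (1 + number of exceedances) / (B + 1) is valid at every B, not only asymptotically. A minimal two-sample sketch (the function name and the difference-in-means statistic are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_permutation_pvalue(x, y, n_perm=999, rng=rng):
    """p-value from random permutations for a two-sample comparison.

    The observed (identity) permutation is always counted, so under the
    null hypothesis P(p <= alpha) <= alpha holds exactly for every
    number of permutations, not only in the limit.
    """
    obs = abs(x.mean() - y.mean())             # observed test statistic
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 1                                  # counts the identity permutation
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(perm[:n].mean() - perm[n:].mean())
        if stat >= obs:
            count += 1
    return count / (n_perm + 1)
```

Because the identity permutation is included, the smallest attainable p-value is 1 / (n_perm + 1), which is what makes the level guarantee exact rather than approximate.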
When multiple hypotheses are tested, interest often lies in ensuring that the proportion of false discoveries (FDP) is small with high confidence. In this paper, confidence upper bounds for the FDP are constructed that are simultaneous over all rejection cut-offs. In particular, this allows the user to select a set of hypotheses post hoc such that the FDP lies below some constant with high confidence. Our method uses permutations to account for the dependence structure in the data. So far, only Meinshausen has provided an exact, permutation-based and computationally feasible method for simultaneous FDP bounds. We provide an exact method that uniformly improves this procedure. Further, we provide a generalization of this method that lets the user select the shape of the simultaneous confidence bounds, giving more freedom in determining the power properties of the method. Interestingly, several existing permutation methods, such as Significance Analysis of Microarrays (SAM) and Westfall and Young's maxT method, are obtained as special cases.
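The flavor of a permutation-based FDP upper bound can be conveyed with a simplified, per-threshold sketch. This is a pointwise variant in the spirit of such methods, not the paper's actual simultaneous procedure; the function name, inputs, and quantile rule are all assumptions for illustration:

```python
import numpy as np

def simultaneous_fdp_bound(perm_pvals, obs_pvals, thresholds, alpha=0.05):
    """Per-threshold permutation FDP upper bound (simplified sketch).

    perm_pvals: (B, m) array of p-values recomputed under B random
    permutations; obs_pvals: the m observed p-values. For each cut-off
    t, the number of false discoveries among {i : p_i <= t} is bounded
    by the (1 - alpha)-quantile, over permutations, of the count of
    permutation p-values below t. Note: this is pointwise in t, not
    simultaneous over all cut-offs as in the paper's method.
    """
    B = perm_pvals.shape[0]
    k = min(int(np.ceil((1 - alpha) * (B + 1))) - 1, B - 1)  # quantile index
    bounds = {}
    for t in thresholds:
        null_counts = np.sort((perm_pvals <= t).sum(axis=1))
        v_bar = null_counts[k]                 # bound on false discoveries
        r = int((obs_pvals <= t).sum())        # number of rejections
        bounds[t] = min(v_bar, r) / max(r, 1)  # FDP upper bound in [0, 1]
    return bounds
```

Making such bounds hold simultaneously over every cut-off, so that a post hoc choice of rejection set remains valid, is exactly what the paper's construction adds beyond this pointwise sketch.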
Summary. Significance analysis of microarrays (SAM) is a highly popular permutation-based multiple-testing method that estimates the false discovery proportion (FDP): the fraction of false positive results among all rejected hypotheses. Perhaps surprisingly, until now this estimate had no known statistical properties. This paper extends SAM by providing 1 − α upper confidence bounds for the FDP, so that exact confidence statements can be made. As a special case, an estimate of the FDP is obtained that underestimates the FDP with probability at most 0.5. Moreover, using a closed testing procedure, this paper decreases the upper bounds and estimates in such a way that the confidence level is maintained. We base our methods on a general result on exact testing with random permutations.
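A median-type FDP estimate of the kind described can be sketched as follows. This is a simplified illustration, not the exact SAM algorithm; the function name and inputs are hypothetical:

```python
import numpy as np

def sam_style_fdp_estimate(obs_stats, perm_stats, cutoff):
    """Median-type FDP estimate in the spirit of SAM (sketch only).

    obs_stats: m observed test statistics; perm_stats: (B, m) statistics
    recomputed under B random permutations. The median of the
    permutation exceedance counts estimates the number of false
    positives; dividing by the number of rejections gives an estimate
    that underestimates the FDP with probability at most 0.5.
    """
    r = int((np.abs(obs_stats) >= cutoff).sum())       # number of rejections
    if r == 0:
        return 0.0                                     # nothing rejected
    null_counts = (np.abs(perm_stats) >= cutoff).sum(axis=1)
    v_hat = float(np.median(null_counts))              # estimated false positives
    return min(v_hat / r, 1.0)
```

Replacing the median by a higher quantile of the permutation counts is the intuition behind turning such an estimate into a 1 − α upper confidence bound.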
Summary
Generalized linear models are often misspecified because of overdispersion, heteroscedasticity and ignored nuisance variables. Existing quasi-likelihood methods for testing in misspecified models often do not provide satisfactory type I error rate control. We provide a novel semiparametric test, based on sign flipping individual score contributions. The parameter tested is allowed to be multi-dimensional and even high-dimensional. Our test is often robust against the mentioned forms of misspecification and provides better type I error control than its competitors. When nuisance parameters are estimated, our basic test becomes conservative. We show how to take nuisance estimation into account to obtain an asymptotically exact test. Our proposed test is asymptotically equivalent to its parametric counterpart.
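The basic idea of sign flipping individual score contributions can be sketched for a one-dimensional parameter with no nuisance estimation. The function name is illustrative, and the setup is a toy version of the general method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sign_flip_test(scores, n_flips=999, rng=rng):
    """Basic sign-flipping test on individual score contributions (sketch).

    'scores' holds the n per-observation score contributions for the
    tested parameter; under the null they are (approximately) symmetric
    around zero, so randomly flipping their signs generates the
    reference distribution. The identity flip (all +1) is counted,
    keeping the resulting p-value valid.
    """
    obs = abs(scores.sum())                    # observed score statistic
    count = 1                                  # counts the identity flip
    for _ in range(n_flips):
        flips = rng.choice([-1.0, 1.0], size=scores.shape[0])
        if abs((flips * scores).sum()) >= obs:
            count += 1
    return count / (n_flips + 1)
```

Because only the signs of the individual contributions are resampled, the procedure does not rely on a correctly specified variance function, which is the source of its robustness to overdispersion and heteroscedasticity.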