An initial screening experiment may lead to ambiguous conclusions about which factors are active in explaining the variation of an outcome variable; adding follow-up runs then becomes necessary. We propose a fully Bayes objective approach to follow-up designs, using prior distributions suitably tailored to model selection. We adopt a design criterion based on a weighted average of Kullback-Leibler divergences between the predictive distributions of all possible pairs of models. When applied to real data, our method produces results that compare favorably to previous analyses based on subjective, weakly informative priors. Supplementary materials are available online.
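The pairwise-KL criterion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes discrete predictive distributions on a common support, weights each ordered model pair by the product of posterior model probabilities (one plausible weighting), and all numbers are made up.

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions on the same support;
    # assumes q > 0 wherever p > 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def pairwise_kl_criterion(predictives, weights):
    """Weighted average of KL divergences over all ordered model pairs.

    predictives: list of predictive pmfs (one per rival model) for a
    candidate follow-up run; weights: posterior model probabilities.
    """
    score = 0.0
    for i, pi in enumerate(predictives):
        for j, pj in enumerate(predictives):
            if i != j:
                score += weights[i] * weights[j] * kl_divergence(pi, pj)
    return score

# Toy example: three rival models' predictive pmfs at one candidate run.
preds = [[0.7, 0.3], [0.5, 0.5], [0.2, 0.8]]
post = [0.5, 0.3, 0.2]  # posterior model probabilities
print(pairwise_kl_criterion(preds, post))
```

A follow-up run would then be chosen to maximize this score over the candidate runs, since a large value means the rival models make predictions that are easy to tell apart.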
We consider a bivariate logistic model for a binary response and assume that two rival dependence structures are possible. Copula functions are useful tools for modeling different kinds of dependence with arbitrary marginal distributions; we take the Clayton and Gumbel copulae as competing association models. The focus is on applications to testing a new drug with respect to both efficacy and toxicity outcomes. In this context, one of the main goals is to find the dose which maximizes the probability of efficacy without toxicity, herein called the P-optimal dose. If the P-optimal dose changes under the two rival copulae, then identifying the proper association model becomes relevant. To this aim, we propose a criterion (called the PKL-criterion) which enables us to find the optimal doses to discriminate between the rival copulae, subject to a constraint that protects patients against dangerous doses. Furthermore, by applying the likelihood ratio test for non-nested models in a simulation study, we confirm that the PKL-optimal design is indeed able to discriminate between the rival copulae.
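To see how the P-optimal dose can depend on the copula, the standard Clayton and Gumbel forms can be combined with logistic marginals in a few lines. This is a hedged sketch: the intercepts, slopes, and copula parameter below are illustrative, not taken from the paper, and the joint is built by applying the copula to the marginal probabilities of non-response.

```python
import numpy as np

def clayton(u, v, theta):
    # Clayton copula C(u, v), theta > 0
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

def gumbel(u, v, theta):
    # Gumbel copula C(u, v), theta >= 1
    return np.exp(-(((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta)))

def p_eff_no_tox(dose, copula, theta, beta_e=(-1.0, 1.0), beta_t=(-3.0, 1.0)):
    """P(efficacy = 1, toxicity = 0) at a dose, with logistic marginals.

    With u = P(E=0), v = P(T=0), the joint gives
    P(E=1, T=0) = P(T=0) - C(u, v).
    Marginal coefficients are illustrative assumptions.
    """
    p_e = 1.0 / (1.0 + np.exp(-(beta_e[0] + beta_e[1] * dose)))  # P(efficacy)
    p_t = 1.0 / (1.0 + np.exp(-(beta_t[0] + beta_t[1] * dose)))  # P(toxicity)
    u, v = 1.0 - p_e, 1.0 - p_t
    return v - copula(u, v, theta)

# Scan a dose grid under each copula and report the maximizer.
doses = np.linspace(0.0, 6.0, 601)
for cop, name in [(clayton, "Clayton"), (gumbel, "Gumbel")]:
    probs = [p_eff_no_tox(d, cop, theta=2.0) for d in doses]
    print(name, "P-optimal dose ~", doses[int(np.argmax(probs))])
```

Since any copula satisfies C(u, v) <= min(u, v), the returned probability is always nonnegative; when the two maximizers differ, discriminating between the copulae matters for dose selection.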
Big Data are huge amounts of digital information that rarely result from properly planned surveys; as a consequence, they often contain redundant observations. When the aim is to answer specific questions of interest, we suggest selecting a subsample of units that retains most of the information relevant to this goal. Selection methods driven by the theory of optimal design incorporate the inferential purpose and thus perform better than standard sampling schemes.
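One simple way to select such an informative subsample is a greedy D-optimality rule: repeatedly add the row that most increases the determinant of the information matrix X'X. The sketch below is one plausible instance of design-driven subsampling, not the authors' algorithm; the ridge term is an assumption to keep the determinant positive before p rows are chosen.

```python
import numpy as np

def d_optimal_subsample(X, k, ridge=1e-8):
    """Greedy D-optimal subsampling: pick k rows of X that keep
    det(X'X) as large as possible (illustrative sketch)."""
    n, p = X.shape
    chosen = []
    M = ridge * np.eye(p)  # small ridge so log-det is finite from the start
    available = list(range(n))
    for _ in range(k):
        # pick the candidate row whose addition maximizes log det(M + x x')
        best, best_logdet = None, -np.inf
        for i in available:
            x = X[i]
            ld = np.linalg.slogdet(M + np.outer(x, x))[1]
            if ld > best_logdet:
                best, best_logdet = i, ld
        chosen.append(best)
        available.remove(best)
        M += np.outer(X[best], X[best])
    return chosen

# Toy example: keep 20 of 200 simulated units for a 3-covariate model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
idx = d_optimal_subsample(X, k=20)
print(len(idx), len(set(idx)))
```

The greedy pass costs O(n k) determinant evaluations, so for truly large n one would use rank-one determinant updates or exchange algorithms instead; the sketch only conveys the idea of letting the design criterion drive the sampling.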
Measurement systems capability analysis aims to test whether the variability of a measurement system is small relative to the variability of the monitored process. Open questions remain concerning both the interpretation of the critical values of the indices typically used by practitioners to assess the capability of a gauge and the choice of the size of the experimental design used to test the repeatability and reproducibility of the measurement process. In this paper, starting from the misclassification rates of a measurement system, we present a solution to these issues.
MISCLASSIFICATION RATES, CRITICAL VALUES AND SIZE OF THE DESIGN
Even with these limitations, the strength of our proposal is to have shed light on how to set (possibly conservative) limits for the many GRR indices used by practitioners, and on the relationships among them.
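The misclassification rates that the proposal starts from can be illustrated with a small Monte Carlo sketch. The normal model and all parameter values below are illustrative assumptions, not the paper's exact setup: true part values are normal, the gauge adds independent normal error, and a part is misclassified when the measured value falls on the wrong side of a specification limit.

```python
import numpy as np

def misclassification_rates(sigma_p, sigma_m, lsl, usl, n=200_000, seed=1):
    """Monte Carlo false-accept / false-reject rates of a gauge.

    True part value Y ~ N(0, sigma_p^2); observed X = Y + E with
    measurement error E ~ N(0, sigma_m^2). A part is conforming when
    Y lies in [lsl, usl]; it is accepted when X does.
    """
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, sigma_p, n)          # true values
    x = y + rng.normal(0.0, sigma_m, n)      # measured values
    conforming = (y >= lsl) & (y <= usl)
    accepted = (x >= lsl) & (x <= usl)
    false_accept = np.mean(~conforming & accepted)   # bad part passed
    false_reject = np.mean(conforming & ~accepted)   # good part rejected
    return false_accept, false_reject

# Illustrative gauge: process sd 1.0, measurement sd 0.3, specs at +/- 2.
fa, fr = misclassification_rates(sigma_p=1.0, sigma_m=0.3, lsl=-2.0, usl=2.0)
print(round(fa, 4), round(fr, 4))
```

Both rates shrink as the measurement standard deviation decreases relative to the process standard deviation, which is exactly the ratio that GRR-type indices summarize; tying critical values of those indices to target misclassification rates is the idea behind the approach.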