Abstract: Choosing between competing models lies at the heart of scientific work and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment on contaminant transport in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.
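The core quantity in the abstract above, mutual information as the expected reduction in Shannon entropy of the model choice indicator, can be illustrated with a minimal sketch. This is not the PreDIA implementation; it assumes a simplified setting with a discretized set of prospective data outcomes and known likelihoods `p(d | M_k)` per model, so the expectation over future data becomes a finite sum:

```python
import numpy as np

def shannon_entropy(weights):
    """Shannon entropy (in nats) of a discrete probability vector."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(w * np.log(w)))

def expected_information_gain(prior_weights, likelihoods):
    """Mutual information between the model choice indicator and the data.

    prior_weights : prior Bayesian model weights, shape (n_models,)
    likelihoods   : p(d | M_k) for discretized prospective data outcomes,
                    shape (n_models, n_outcomes); each row sums to 1.
    """
    prior = np.asarray(prior_weights, dtype=float)
    lik = np.asarray(likelihoods, dtype=float)
    # Marginal (preposterior) probability of each prospective data outcome
    p_d = prior @ lik
    # Posterior model weights for each outcome via Bayes' rule
    post = (prior[:, None] * lik) / p_d[None, :]
    # Expected posterior entropy, averaged over prospective outcomes
    post_H = np.array([shannon_entropy(post[:, j]) for j in range(post.shape[1])])
    return shannon_entropy(prior) - float(p_d @ post_H)
```

With two equally likely models, a design whose data perfectly discriminate between them yields the full prior entropy ln(2) as information gain, while a design whose likelihoods are identical across models yields zero.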
A variety of empirical formulas exists for predicting river bed evolution with hydro-morphodynamic river models. Modelers lack objective guidance on how to select the most appropriate one for a specific application. Such guidance can be provided by Bayesian model selection (BMS). However, its applicability is limited by high computational costs. To transfer it to computationally expensive river modeling tasks, we propose to combine BMS with model reduction based on arbitrary Polynomial Chaos Expansion. To account for approximation errors in the reduced models, we introduce a novel correction factor that yields a reliable model ranking even under strong computational time constraints. We demonstrate our proposed approach on a case study for a 10-km stretch of the lower Rhine river. The correction factor can shield against misleading model rankings; in our case study, it increased the confidence in model selection.
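The BMS step described above turns Bayesian model evidences (BMEs) into posterior model weights, with the evidence of each model typically estimated by brute-force Monte Carlo averaging of the likelihood over prior parameter samples. The sketch below illustrates that generic mechanism under an assumed independent Gaussian measurement-error model; it does not reproduce the paper's reduced models or the proposed correction factor:

```python
import numpy as np

def gaussian_log_likelihood(obs, sim, sigma):
    """Log-likelihood of observed data given one model run, assuming
    independent Gaussian measurement errors with standard deviation sigma."""
    resid = (np.asarray(obs) - np.asarray(sim)) / sigma
    return -0.5 * np.sum(resid**2) - resid.size * np.log(sigma * np.sqrt(2 * np.pi))

def monte_carlo_log_evidence(obs, prior_sims, sigma):
    """Brute-force BME estimate: average the likelihood over prior samples,
    computed via log-sum-exp for numerical stability."""
    log_liks = np.array([gaussian_log_likelihood(obs, s, sigma) for s in prior_sims])
    m = log_liks.max()
    return float(m + np.log(np.mean(np.exp(log_liks - m))))

def bayesian_model_weights(log_evidences, prior=None):
    """Posterior model weights from log model evidences (uniform model
    prior unless specified)."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if prior is None:
        prior = np.full(log_ev.size, 1.0 / log_ev.size)
    log_post = np.log(prior) + log_ev
    log_post -= log_post.max()  # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()
```

Working with log evidences and log-sum-exp matters in practice: likelihoods of long observation vectors underflow double precision long before the model ranking itself becomes ambiguous.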