Abstract: Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible…
“…However, there is very little coverage of the topic in the behavioral sciences, as evidenced by the small number of articles in psychological journals on choosing the best values along a continuum in the pursuit of either of these modeling goals. Although Myung, Pitt, and colleagues (Cavagnaro, Myung, Pitt, & Kujala, 2010; Cavagnaro, Pitt, & Myung, 2011; Myung & Pitt, 2009) recently have been actively developing methods of choosing a set of IV values in order to improve selection among known models, there are only a handful of other psychology publications involving general design approaches to optimal parameter estimation (Berger, 1994; Berger, King, & Wong, 2000; Passos & Berger, 2004; Vermeulen, Goos, & Vandebroek, 2008), with each receiving no more than a handful of citations (ranging from 0 to 8 in the Social Science Citation Index). One exception is the common use of adaptive methods in the area of psychophysics, where stimulus levels are dynamically chosen on the basis of unfolding performance in order to estimate the threshold or slope of an ogival psychometric function (Leek, 2001).…”
Section: Theoretical Issues in Model Selection and Parameter Estimation (mentioning, confidence: 99%)
“…In the absence of extant models, a rich sampling technique may help map out the general shape of the relationship among the IVs and DVs and do so with sufficient accuracy of parameter estimates to provide utility. As models become more formalized later in the development of a scientific subdomain, there will need to be a greater emphasis on the types of formal value selection methods discussed elsewhere (Cavagnaro et al., 2010; Cavagnaro et al., 2011; Myung & Pitt, 2009). But the field may be slow to reach that stage if insufficient sampling of stimulus dimensions persists in the field of experimental psychology.…”
Section: Theoretical Issues in Model Selection and Parameter Estimation (mentioning)
The choice of stimulus values to test in any experiment is a critical component of good experimental design. This study examines the consequences of random and systematic sampling of data values for the identification of functional relationships in experimental settings. Using Monte Carlo simulation, uniform random sampling was compared with systematic sampling of two, three, four, or N equally spaced values along a single stimulus dimension. Selection of the correct generating function (a logistic or a linear model) was improved with each increase in the number of levels sampled, with N equally spaced values and random stimulus sampling performing similarly. These improvements came at a small cost in the precision of the parameter estimates for the generating function.
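The simulation logic described in this abstract can be sketched in a few lines. The generating function, its parameters, the noise level, and the use of raw residual error as the selection criterion are all illustrative assumptions, not the paper's actual settings: data are generated from a logistic curve, both a linear and a logistic model are fit, and the proportion of correct selections is compared across sampling schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x, a=1.5, b=0.0):
    # assumed generating function (parameter values are illustrative)
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

def sse_linear(x, y):
    # least-squares line fit; return the residual sum of squares
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2)

def sse_logistic(x, y):
    # crude grid search over (a, b) in place of a full nonlinear optimizer
    best = np.inf
    for a in np.linspace(0.5, 3.0, 11):
        for b in np.linspace(-1.0, 1.0, 11):
            best = min(best, np.sum((y - logistic(x, a, b)) ** 2))
    return best

def run(sampler, n_obs=60, n_sims=100, noise=0.1):
    """Proportion of simulations in which the logistic (true) model wins."""
    correct = 0
    for _ in range(n_sims):
        x = sampler(n_obs)
        y = logistic(x) + rng.normal(0.0, noise, n_obs)
        # pick the model with the lower residual error; both fits have two
        # free parameters here, so no complexity penalty is applied
        if sse_logistic(x, y) < sse_linear(x, y):
            correct += 1
    return correct / n_sims

uniform_random = lambda n: rng.uniform(-2.0, 2.0, n)   # random stimulus sampling
two_levels = lambda n: np.tile([-2.0, 2.0], n // 2)    # two equally spaced levels

print("2 levels:", run(two_levels))
print("random :", run(uniform_random))
```

Two endpoint levels provide no information about curvature between them, which is why selection accuracy in the abstract improves as more levels are sampled.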
“…Our proposed framework thus combines several advantageous properties of previous approaches: (1) It builds on the rigorous and consistent formulation of entropy-based OD for model choice as used in Cavagnaro et al [58] and Drovandi et al [59]. (2) For geoscientists, this establishes the link between optimal design and the mentality to view models as competing hypotheses.…”
Section: Introduction (mentioning, confidence: 99%)
“…Several authors have suggested the use of mutual information to measure the impact of potential future data on model discrimination (e.g., [57-59]). While Box and Hill [57] used a lower-order approximation of mutual information for the Box-Hill discrimination function, the recent approaches by Cavagnaro et al. [58] and Drovandi et al. [59] use a sample-based representation of the involved joint distributions. However, their approaches are limited to sequential design problems.…”
Abstract: Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.
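The core quantity in this abstract, the expected reduction in Shannon entropy of the model weights, can be estimated with a plain Monte Carlo sketch. The two isotherms (a Henry and a Langmuir form), the lognormal priors, and the noise level below are assumptions for illustration, not the paper's actual setup or the PreDIA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# two hypothetical competing sorption isotherms s(c):
def henry(c, k):
    return k * c

def langmuir(c, k, smax):
    return smax * k * c / (1.0 + k * c)

def entropy(w):
    w = w[w > 0]
    return -np.sum(w * np.log(w))

def expected_info_gain(c_design, n_sim=300, n_param=200, noise=0.05):
    """Sample-based estimate of the mutual information between the model
    indicator and the noisy measurements at concentrations c_design:
    MI = H(prior model weights) - E_y[ H(posterior model weights) ]."""
    k1 = rng.lognormal(0.0, 0.3, n_param)                # prior samples, model 1
    k2 = rng.lognormal(0.0, 0.3, n_param)                # prior samples, model 2
    s2 = rng.lognormal(0.0, 0.2, n_param)
    pred1 = henry(c_design[None, :], k1[:, None])        # shape (n_param, n_c)
    pred2 = langmuir(c_design[None, :], k2[:, None], s2[:, None])
    prior_h = entropy(np.array([0.5, 0.5]))
    post_h = 0.0
    for _ in range(n_sim):
        preds = pred1 if rng.integers(2) == 0 else pred2  # draw the true model
        y = preds[rng.integers(n_param)] + rng.normal(0.0, noise, c_design.size)
        # marginal likelihood of y under each model (averaging over parameters)
        ll1 = -0.5 * np.sum((y - pred1) ** 2, axis=1) / noise ** 2
        ll2 = -0.5 * np.sum((y - pred2) ** 2, axis=1) / noise ** 2
        shift = max(ll1.max(), ll2.max())                # common stabiliser
        w = np.array([np.mean(np.exp(ll1 - shift)),
                      np.mean(np.exp(ll2 - shift))])
        post_h += entropy(w / w.sum())
    return prior_h - post_h / n_sim

# designs that sample only low concentrations barely separate the isotherms,
# because the Langmuir form is approximately linear there:
print(expected_info_gain(np.array([0.1, 0.2, 0.3])))
print(expected_info_gain(np.array([0.1, 1.0, 5.0])))
```

Maximizing this estimate over candidate designs is the "maximum model discrimination" objective the abstract describes; for two models the gain is bounded by ln 2.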
“…Using MI for the selection of maximally informative experiments has been advocated by several recent lines of research, for example, in experimental psychology [5,19], computational neuroscience [22,21,15], and quantum physics [9]. An alternative approach is to maximize the expected Fisher information of the experiment as in [10].…”
Abstract. We consider the optimal design problem for the Ornstein-Uhlenbeck process with fixed threshold, commonly used to describe a leaky, noisy integrate-and-fire neuron. We present a solution to the problem of devising the best external time-dependent perturbation to the process in order to facilitate the estimation of the characteristic time parameter for this process. The optimal design problem is constrained here by the fact that only the times between threshold crossings from below, known as hitting times, are observable. The optimal control is based on a maximization of the mutual information between the posterior of the unknown parameter given these observations and the distribution of the hitting times. Our approach is based on the adjoint method for computing the gradient of a functional of a solution to a Fokker-Planck partial differential equation with respect to an input function (i.e., to the control). Our method also enables the estimation of other parameters, in the case when more than one parameter is unknown.
Key words: design of experiments, mutual information, leaky integrate-and-fire neuronal models, Fokker-Planck equation, probability density function control method, Ornstein-Uhlenbeck process
AMS subject classifications: 62K05, 62L05, 93E20, 92C20, 92D25, 49L20, 49M25
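The observable in this design problem, the hitting times of the thresholded OU process, can be simulated directly. The sketch below uses a plain Euler-Maruyama scheme with illustrative parameter values (not taken from the paper, which works with the Fokker-Planck equation rather than sample paths), and shows how the characteristic time constant tau shapes the first-passage-time distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def hitting_times(tau, mu=2.5, sigma=0.5, threshold=1.0,
                  control=lambda t: 0.0, dt=1e-3, t_max=10.0, n_trials=300):
    """Euler-Maruyama simulation of the OU membrane potential
        dV = (-V / tau + mu + u(t)) dt + sigma dW,  V(0) = 0,
    recording the first time V crosses `threshold` from below.
    All parameter values here are illustrative assumptions."""
    times = []
    sq = np.sqrt(dt)
    for _ in range(n_trials):
        v, t = 0.0, 0.0
        while t < t_max:
            v += (-v / tau + mu + control(t)) * dt + sigma * sq * rng.normal()
            t += dt
            if v >= threshold:
                times.append(t)
                break
    return np.array(times)

# the hitting-time distribution carries the information about tau that the
# optimal perturbation u(t) is designed to amplify:
for tau in (0.5, 2.0):
    fpt = hitting_times(tau)
    print(f"tau={tau}: mean first-passage time {fpt.mean():.3f}, "
          f"{fpt.size} trials crossed")
```

In the paper's formulation the control u(t) is chosen to maximize the mutual information between the parameter posterior and this hitting-time distribution; the simulation above only illustrates the forward model being controlled.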