Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this paper, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information, and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
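The abstract above does not specify the computation, but the core idea of a mutual-information utility for model discrimination can be sketched numerically. The snippet below is an illustrative toy, not the paper's implementation: it assumes two hypothetical retention models (exponential and power decay), a uniform prior over a small parameter grid, and a binary recall outcome, then picks the retention lag that maximizes the expected information gain about the model indicator.

```python
import numpy as np

# Two candidate retention models: probability of recall after lag t.
# (Illustrative forms; the paper's actual models may differ.)
def p_exponential(t, a):
    return np.exp(-a * t)

def p_power(t, a):
    return (1.0 + t) ** (-a)

# Discrete parameter grid and uniform priors over models.
params = np.linspace(0.05, 1.0, 20)
models = [p_exponential, p_power]
prior_m = np.array([0.5, 0.5])

def utility(t):
    """Mutual information between the model indicator and the binary
    recall outcome at retention lag t, with parameters marginalized
    out under a uniform grid prior."""
    # Prior-predictive probability of recall under each model.
    pred = np.array([np.mean([m(t, a) for a in params]) for m in models])
    u = 0.0
    for py_given_m in (pred, 1.0 - pred):      # outcomes y = 1 and y = 0
        p_y = np.dot(prior_m, py_given_m)       # marginal prob. of outcome
        if p_y > 0:
            # sum_m p(m) p(y|m) log(p(y|m) / p(y))
            u += np.sum(prior_m * py_given_m * np.log(py_given_m / p_y))
    return u

# Search a grid of candidate designs (retention lags) for the maximizer.
lags = np.linspace(0.1, 20.0, 50)
best = lags[np.argmax([utility(t) for t in lags])]
print(f"most informative retention lag: {best:.2f}")
```

In a full ADO loop this utility would be recomputed after each observation as the priors over models and parameters are updated, so the chosen design adapts to the data collected so far.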
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond.
Numerous empirical studies have examined the question of whether transitivity of preference is a viable axiom of human decision making, but they arrive at different conclusions depending on how they model choice variability. To bring some consistency to these seemingly conflicting results from the literature, this article moves beyond the binary question of whether or not transitivity holds, asking instead: In what way does transitivity hold (or not hold) stochastically, and how robust is (in)transitive preference at the individual level? To answer these questions, we reanalyze data from seven past experiments, using Bayesian model selection to place the major models of stochastic (in)transitivity in direct competition, and also carry out a new experiment examining transitivity under time pressure. We find that a majority of individuals satisfy transitivity, but according to different stochastic specifications (i.e., models of choice variability), and that individuals are largely stable in their transitivity "types" across decision making environments. Thus, transitivity of preference, as well as the particular type of individual choice variability associated with it, appear to be robust properties at the individual level.
The tendency to discount the value of future rewards has become one of the best-studied constructs in the behavioral sciences. Although hyperbolic discounting remains the dominant quantitative characterization of this phenomenon, a variety of models have been proposed, and consensus around the one that most accurately describes behavior has been elusive. To help bring some clarity to this issue, we propose an Adaptive Design Optimization (ADO) method for fitting and comparing models of temporal discounting. We then conduct an ADO experiment aimed at discriminating among six popular models of temporal discounting. Rather than supporting a single underlying model, our results show that each model is inadequate in some way to describe the full range of behavior exhibited across subjects. The precision of results provided by ADO further identifies specific properties of models, such as accommodating both increasing and decreasing impatience, that are necessary to describe temporal discounting broadly.
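The abstract does not name the six models it compares, but the contrast it draws can be illustrated with the two most familiar functional forms, exponential and hyperbolic discounting. The sketch below (assumed standard forms, not taken from the paper) shows how the two models diverge at long delays, which is the kind of behavioral property, such as decreasing impatience, that model comparison must capture.

```python
import numpy as np

# Two standard discounting models (illustrative; the paper compares six):
# exponential: V = A * exp(-k * D)    hyperbolic: V = A / (1 + k * D)
def exponential(amount, delay, k):
    return amount * np.exp(-k * delay)

def hyperbolic(amount, delay, k):
    return amount / (1.0 + k * delay)

# With the same rate parameter k, the hyperbolic model implies decreasing
# impatience: its effective discount rate falls as delay grows, so it
# preserves more value at long delays than the exponential model.
k = 0.01
for d in (1.0, 30.0, 180.0, 365.0):
    print(f"delay {d:5.0f} days: exp = {exponential(100, d, k):6.2f}, "
          f"hyp = {hyperbolic(100, d, k):6.2f}")
```

Comparing such models on choice data typically means computing each model's likelihood for observed choices and penalizing complexity; ADO's contribution, per the abstract, is selecting the reward/delay pairs that make that comparison maximally diagnostic.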