Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making
Hawkins, G. E., Forstmann, B. U., Wagenmakers, E.-J., Ratcliff, R., & Brown, S. D. (2015). The Journal of Neuroscience, 35(6), 2476-2484. https://doi.org/10.1523/JNEUROSCI.2410-14.2015

For nearly 50 years, the dominant account of decision-making has held that noisy information is accumulated until a fixed threshold is crossed. This account has been tested extensively against behavioral and neurophysiological data for decisions about consumer goods, perceptual stimuli, eyewitness testimony, memories, and dozens of other paradigms, with no systematic misfit between model and data. Recently, the standard model has been challenged by alternative accounts that assume that less evidence is required to trigger a decision as time passes. Such "collapsing boundaries" or "urgency signals" have gained popularity in some theoretical accounts of neurophysiology. Nevertheless, evidence in favor of these models is mixed, with support coming from only a narrow range of decision paradigms, compared with a long history of support from dozens of paradigms for the standard theory. We conducted the first large-scale analysis of data from humans and nonhuman primates across three distinct paradigms, using powerful model-selection methods to compare evidence for fixed versus collapsing bounds. Overall, we identified evidence in favor of the standard model with fixed decision boundaries. We further found that evidence for static versus dynamic response boundaries may depend on specific paradigms or procedures, such as the extent of task practice. We conclude that the difficulty of selecting between collapsing- and fixed-bounds models has received insufficient attention in previous research, calling some previous results into question.
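The two competing accounts are easy to state computationally. The following is a minimal sketch, not the authors' fitting code, of a drift-diffusion process with either a fixed or a linearly collapsing decision boundary; all parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_trial(drift=0.8, bound=1.0, collapse_rate=0.0,
                   dt=0.001, max_t=3.0, rng=None):
    """Simulate one diffusion trial.

    collapse_rate = 0 gives the standard fixed-bound model;
    collapse_rate > 0 makes the bound shrink linearly over time,
    so less evidence is needed to respond as the trial wears on.
    Parameter values are illustrative, not fitted.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        b = max(bound - collapse_rate * t, 0.0)   # current boundary height
        if x >= b:
            return t, 1    # correct response (upper bound)
        if x <= -b:
            return t, 0    # error response (lower bound)
        t += dt
    return max_t, -1       # no response before max_t

rng = np.random.default_rng(1)
for label, c in [("fixed", 0.0), ("collapsing", 0.4)]:
    trials = [simulate_trial(collapse_rate=c, rng=rng) for _ in range(1000)]
    rts = [rt for rt, resp in trials if resp >= 0]
    acc = np.mean([resp for _, resp in trials if resp >= 0])
    print(f"{label:10s} mean RT = {np.mean(rts):.3f} s, accuracy = {acc:.3f}")
```

In typical parameterisations the collapsing bound trades accuracy for speed late in the trial, and that shift in the shape of the RT distributions is the signature that model-selection analyses of this kind must detect.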
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables, including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge on the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced-choice task. The contributing teams were blind to the manipulations and had to infer which aspect of behavior had changed, using their method of choice. The contributors employed a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across methods, these "modeler's degrees of freedom" did affect inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those from the more complex methods. We recommend that, in general, cognitive models become a standard analysis tool for response time data. In particular, we argue that simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output of response time models.
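As an illustration of the kind of "simpler approach" these results favor, the EZ-diffusion method (Wagenmakers, van der Maas, & Grasman, 2007) maps three summary statistics onto drift rate, boundary separation, and non-decision time in closed form. A minimal sketch, written from the published equations; the edge corrections for accuracy of exactly 0, 0.5, or 1 are omitted, and the input values below are made up for illustration:

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    prop_correct : proportion of correct responses, in (0.5, 1)
    rt_var       : variance of correct response times (s^2)
    rt_mean      : mean of correct response times (s)
    s            : diffusion scaling parameter (0.1 by convention)

    Returns (drift rate v, boundary separation a, non-decision time Ter).
    Edge corrections for prop_correct in {0, 0.5, 1} are omitted here.
    """
    L = np.log(prop_correct / (1 - prop_correct))   # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
    v = np.sign(prop_correct - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    return v, a, rt_mean - mdt

v, a, ter = ez_diffusion(prop_correct=0.85, rt_var=0.12, rt_mean=0.55)
print(f"v = {v:.3f}, a = {a:.3f}, Ter = {ter:.3f}")
```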
Theories of perceptual decision-making have been dominated by the idea that evidence accumulates in favor of different alternatives until some fixed threshold amount is reached, which triggers a decision. Recent theories have suggested that these thresholds may not be fixed during each decision, but instead change as time passes. Such collapsing thresholds can improve performance in particular decision environments, but reviews of data from typical decision-making paradigms have failed to support them. We designed three experiments to test collapsing-threshold assumptions in decision environments specifically tailored to make collapsing thresholds optimal. An emphasis on decision speed encouraged the adoption of collapsing thresholds (most strongly through the use of response deadlines, and to a lesser extent through instruction), but setting an explicit goal of reward-rate optimality through both instructions and task design did not.
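Why would response deadlines in particular push decision-makers toward collapsing thresholds? The intuition is that a trial that times out earns nothing, so lowering the evidence criterion late in the trial converts likely timeouts into guesses that are still better than chance. A small Monte Carlo sketch of that argument, with illustrative parameters rather than the experimental design:

```python
import numpy as np

def trial(collapse, drift=0.5, bound=1.2, deadline=1.5, dt=0.001, rng=None):
    """One diffusion trial under a hard deadline.
    Returns 1 (correct), 0 (error), or None (timed out)."""
    rng = rng or np.random.default_rng()
    x = 0.0
    for step in range(int(deadline / dt)):
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        b = max(bound - collapse * step * dt, 0.05)  # shrinking criterion
        if x >= b:
            return 1
        if x <= -b:
            return 0
    return None  # deadline missed: no reward

rng = np.random.default_rng(7)
for label, c in [("fixed bound", 0.0), ("collapsing bound", 0.8)]:
    outcomes = [trial(c, rng=rng) for _ in range(2000)]
    timeouts = outcomes.count(None)
    points = sum(o for o in outcomes if o is not None)  # 1 point per correct
    print(f"{label:17s} timeouts: {timeouts/2000:.1%}, points: {points}")
```

With these settings the fixed bound misses the deadline on a substantial fraction of trials, while the collapsing bound gives up some accuracy to respond in time, earning more points overall.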
For many years, the Diffusion Decision Model (DDM) has successfully accounted for behavioral data from a wide range of domains. Important contributors to the DDM's success are its across-trial variability parameters, which allow the model to account for the various shapes of response time distributions encountered in practice. However, several researchers have pointed out that estimating the variability parameters can be a challenging task, and the numerous fitting methods for the DDM each come with their own associated problems and solutions. This often leaves users in a difficult position. In this collaborative project, we invited researchers from the DDM community to apply their various fitting methods to simulated data and to provide expert guidance on estimating the DDM's across-trial variability parameters with each method. Our study establishes a comprehensive reference resource and describes methods that can help to overcome the challenges associated with estimating these parameters.
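For readers unfamiliar with what these parameters do, the following sketch simulates the DDM with the three standard across-trial variability parameters: sv (drift redrawn from a normal on each trial), sz (starting point redrawn from a uniform), and st (non-decision time redrawn from a uniform). Parameter values are illustrative assumptions, not recommendations:

```python
import numpy as np

def ddm_trial(v=1.0, a=1.0, z=0.5, t0=0.3,
              sv=0.5, sz=0.2, st=0.1, dt=0.001, rng=None):
    """One DDM trial with across-trial variability.

    On every trial the drift, starting point, and non-decision time are
    redrawn:  v_i ~ Normal(v, sv);  z_i ~ Uniform(z - sz/2, z + sz/2),
    expressed as a fraction of boundary separation a;
    t0_i ~ Uniform(t0 - st/2, t0 + st/2).
    """
    rng = rng or np.random.default_rng()
    v_i = rng.normal(v, sv)                        # trial-specific drift
    z_i = rng.uniform(z - sz / 2, z + sz / 2) * a  # trial-specific start point
    t0_i = rng.uniform(t0 - st / 2, t0 + st / 2)   # trial-specific encoding/motor time
    x, t = z_i, 0.0
    while 0.0 < x < a:
        x += v_i * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + t0_i, int(x >= a)   # (response time, 1 = upper / 0 = lower)

rng = np.random.default_rng(3)
rts, resps = map(np.array, zip(*(ddm_trial(rng=rng) for _ in range(2000))))
print(f"accuracy = {np.mean(resps):.3f}, mean RT = {np.mean(rts):.3f} s, "
      f"90th pct RT = {np.quantile(rts, 0.9):.3f} s")
```

With sv > 0, some trials receive low or even negative drifts, producing the slow errors and heavy right tails that the variability parameters were introduced to capture; that same flexibility is what makes them hard to estimate.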
The Linear Ballistic Accumulator (LBA) model of Brown and Heathcote (2008) is used as a measurement tool to answer questions in applied psychology. These analyses involve parameter estimation and model selection, and modern approaches use hierarchical Bayesian methods with Markov chain Monte Carlo (MCMC) to estimate the posterior distribution of the parameters. Although a range of approaches is used for model selection, all are based on the posterior samples produced via MCMC, which means that model selection inferences inherit the properties of the MCMC sampler. We address these constraints by proposing two new approaches to Bayesian estimation of the hierarchical LBA model. Both methods are qualitatively different from all existing approaches and are based on recent advances in particle-based Monte Carlo methods. The first approach is based on particle MCMC with Metropolis-within-Gibbs steps; the second uses a version of annealed importance sampling. Both offer important advantages over existing methods, including greatly improved sampling efficiency and parallelisability for high-performance computing. A further advantage of the annealed importance sampling algorithm is that an estimate of the marginal likelihood is obtained as a byproduct of sampling, which makes it straightforward to apply model selection via Bayes factors. The new approaches we develop provide opportunities to apply the LBA model with greater confidence than before, and to extend its use to previously intractable cases. We illustrate the proposed methods with pseudo-code, and by application to simulated and real datasets.
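The LBA itself is simple to simulate, which is part of its appeal as a measurement model. A sketch under the standard Brown and Heathcote (2008) assumptions (uniform start points, drift rates drawn from a normal across trials, deterministic linear accumulation); the parameter values are illustrative, and this is not the hierarchical sampler the paper develops:

```python
import numpy as np

def lba_trial(b=1.0, A=0.5, v=(1.0, 0.7), s=0.3, t0=0.2, rng=None):
    """One LBA trial with one accumulator per response option.

    Each accumulator starts at k ~ Uniform(0, A) and rises linearly at rate
    d ~ Normal(v_i, s) until it hits threshold b; the first to arrive wins.
    Drift rates are redrawn until at least one is positive (a common
    convention, since an all-negative draw would never terminate).
    """
    rng = rng or np.random.default_rng()
    starts = rng.uniform(0, A, size=len(v))       # start points
    while True:
        drifts = rng.normal(v, s)                 # per-trial drift rates
        if np.any(drifts > 0):
            break
    finish = np.where(drifts > 0, (b - starts) / drifts, np.inf)
    winner = int(np.argmin(finish))
    return finish[winner] + t0, winner            # (response time, chosen option)

rng = np.random.default_rng(11)
rts, choices = map(np.array, zip(*(lba_trial(rng=rng) for _ in range(5000))))
print(f"P(choice 0) = {np.mean(choices == 0):.3f}, mean RT = {np.mean(rts):.3f} s")
```

Because the within-trial trajectory is deterministic, the LBA's trial likelihood has a closed form, which is what makes the particle-based posterior sampling methods described above tractable.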
We investigate a question relevant to the psychology and neuroscience of perceptual decision-making: whether decisions are based on steadily accumulating evidence, or only on the most recent evidence. We report an empirical comparison of two of the most prominent examples of these theoretical positions, the diffusion model and the urgency-gating model, using both qualitative and quantitative model-based analyses. Our findings support the predictions of the diffusion model over the urgency-gating model, and therefore the notion that evidence accumulates without much decay. Gross qualitative patterns and fine structural details of the data are inconsistent with the notion that decisions are based only on the most recent evidence. More generally, we discuss some strengths and weaknesses of scientific methods that investigate quantitative models by distilling the formal models to qualitative predictions.
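The contrast between the two positions is easiest to see in their update equations. A schematic sketch of simplified forms, with illustrative time constants and gains rather than the papers' fitted values: the diffusion model integrates every momentary sample, whereas the urgency-gating model low-pass filters the momentary evidence with a short time constant (so older samples decay away) and multiplies it by a growing urgency signal.

```python
import numpy as np

def diffusion_step(x, sample, dt=0.001):
    """Standard accumulation: all past evidence is retained."""
    return x + sample * dt

def urgency_gating_step(e, sample, t, tau=0.1, dt=0.001):
    """Leaky filter (time constant tau) times a linearly growing urgency.
    With a short tau, only recent evidence drives the decision variable."""
    e = e + (sample - e) * dt / tau     # low-pass filtered evidence
    return e, e * t                     # (filter state, gated decision variable)

rng = np.random.default_rng(5)
dt, x, e = 0.001, 0.0, 0.0
for step in range(2000):
    t = (step + 1) * dt
    # momentary evidence: weak positive signal plus noise
    sample = 0.5 + rng.standard_normal() / np.sqrt(dt)
    x = diffusion_step(x, sample, dt)
    e, gated = urgency_gating_step(e, sample, t, dt=dt)
print(f"after 2 s: accumulated evidence = {x:.2f}, urgency-gated signal = {gated:.2f}")
```

The diffusion variable reflects the whole stimulus history, while the gated variable tracks roughly the last tau seconds of input scaled up by elapsed time, which is the "most recent evidence" assumption the data speak against.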