We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows complete analytic solutions for choices between any number of alternatives. These solutions (and freely-available computer code) make the model easy to apply to both binary and multiple choice situations. Using data from five previously published experiments, we demonstrate that the LBA model successfully accommodates empirical phenomena from binary and multiple choice tasks that have proven difficult for other theoretical accounts. Our results are encouraging in a field beset by the tradeoff between complexity and completeness.
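The racing-accumulator idea in the abstract can be illustrated with a minimal simulation sketch. This is not the authors' code; the parameter names (`b` for the threshold, `A` for the start-point range, `s` for between-trial drift variability, `t0` for non-decision time) and their values are illustrative assumptions consistent with the model description: each accumulator rises linearly and deterministically within a trial, so its finishing time is just distance over rate, and the first to reach threshold determines the choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drift_means, b=1.0, A=0.5, s=0.25, t0=0.2, rng=rng):
    """Simulate one LBA trial (illustrative sketch, not the authors' code).

    Each accumulator starts at a uniform random point in [0, A] and rises
    linearly toward the threshold b at a drift rate drawn once per trial
    from a normal distribution; the first accumulator to reach b wins.
    """
    k = rng.uniform(0.0, A, size=len(drift_means))   # random start points
    d = rng.normal(drift_means, s)                   # trial-specific drift rates
    d = np.where(d > 0, d, np.nan)                   # negative drifts never finish
    t = (b - k) / d                                  # linear rise: time = distance / rate
    choice = int(np.nanargmin(t))                    # fastest accumulator wins the race
    return choice, t0 + np.nanmin(t)                 # add non-decision time

choice, rt = lba_trial([1.0, 0.7])  # accumulator 0 has the higher mean drift
```

Because the within-trial dynamics are deterministic, the finishing-time distribution has a closed form, which is what makes the analytic solutions mentioned in the abstract possible.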
Response inhibition is essential for navigating everyday life. Its derailment is considered integral to numerous neurological and psychiatric disorders, and more generally, to a wide range of behavioral and health problems. Response-inhibition efficiency furthermore correlates with treatment outcome in some of these conditions. The stop-signal task is an essential tool to determine how quickly response inhibition is implemented. Despite its apparent simplicity, there are many features (ranging from task design to data analysis) that vary across studies in ways that can easily compromise the validity of the obtained results. Our goal is to facilitate a more accurate use of the stop-signal task. To this end, we provide 12 easy-to-implement consensus recommendations and point out the problems that can arise when they are not followed. Furthermore, we provide user-friendly open-source resources intended to inform statistical-power considerations, facilitate the correct implementation of the task, and assist in proper data analysis.
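One of the data-analysis choices the stop-signal literature debates is how to estimate the stop-signal reaction time (SSRT). A commonly recommended estimator is the integration method, sketched below. This is a simplified illustration, not the paper's reference implementation: it omits the corrections (e.g. for go omissions and premature responses) that a careful analysis would include.

```python
import numpy as np

def ssrt_integration(go_rts, p_respond, mean_ssd):
    """Integration-method SSRT estimate (simplified sketch).

    SSRT is estimated as the nth quantile of the go-trial RT distribution
    minus the mean stop-signal delay (SSD), where n is the observed
    probability of responding on stop trials.
    """
    go = np.sort(np.asarray(go_rts, dtype=float))
    idx = int(np.ceil(p_respond * len(go))) - 1  # nth RT in the sorted go distribution
    return go[idx] - mean_ssd
```

For example, with go RTs spread from 400 to 590 ms, a 50% response rate on stop trials, and a mean SSD of 200 ms, the estimate is the median go RT minus 200 ms.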
The shape of a response time (RT) distribution can be described by a 3-parameter model consisting of the convolution of the normal and exponential distributions, the ex-Gaussian. Analyses based on mean RT do not take the distribution's shape into account and, for that reason, may obscure aspects of performance. To illustrate the point, the ex-Gaussian model was applied to data obtained from a Stroop task. Mean RT revealed strong interference but no facilitation, whereas the analysis based on the ex-Gaussian model showed both interference and facilitation. In short, analyses that do not take the shape of RT distributions into account can mislead and, therefore, should be avoided.
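The ex-Gaussian density has a standard closed form: the convolution of a Normal(μ, σ) with an Exponential of mean τ. A minimal implementation, using only the standard library:

```python
import math

def ex_gaussian_pdf(x, mu, sigma, tau):
    """Density of the ex-Gaussian distribution: the convolution of a
    Normal(mu, sigma) with an Exponential distribution of mean tau.

    f(x) = (1/tau) * exp((mu - x)/tau + sigma^2/(2 tau^2))
                   * Phi((x - mu)/sigma - sigma/tau)
    """
    z = (x - mu) / sigma - sigma / tau
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return (1.0 / tau) * math.exp(
        (mu - x) / tau + sigma**2 / (2.0 * tau**2)
    ) * phi
```

In RT modeling, μ and σ capture the leading edge of the distribution while τ captures the slow right tail; the distribution's mean is μ + τ, which is why two conditions with equal mean RT can still differ in shape.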
The power function is treated as the law relating response time to practice trials. However, the evidence for a power law is flawed, because it is based on averaged data. We report a survey that assessed the form of the practice function for individual learners and learning conditions in paradigms that have shaped theories of skill acquisition. We fit power and exponential functions to 40 sets of data representing 7,910 learning series from 475 subjects in 24 experiments. The exponential function fit better than the power function in all the unaveraged data sets. Averaging produced a bias in favor of the power function. A new practice function based on the exponential, the APEX function, fit better than a power function with an extra, preexperimental practice parameter. Clearly, the best candidate for the law of practice is the exponential or APEX function, not the generally accepted power function. The theoretical implications are discussed.
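The power-versus-exponential comparison can be sketched with a toy example. In the simplified asymptote-free forms, an exponential practice function (RT = B·e^(−aN)) is linear in trial number N on a log-RT scale, while a power function (RT = B·N^(−a)) is linear in log N; comparing the residuals of the two linear fits shows which form generated the data. This is an illustrative sketch on noiseless synthetic data, not the paper's fitting procedure, which used full nonlinear models with asymptote parameters.

```python
import numpy as np

# Synthetic practice series following an exponential law: RT = B * exp(-a * N)
N = np.arange(1, 51, dtype=float)
rt = 800.0 * np.exp(-0.05 * N)

log_rt = np.log(rt)

# Exponential law: log RT is linear in N.  Power law: log RT is linear in log N.
# Fit both straight lines and compare the sums of squared residuals.
_, exp_resid, *_ = np.polyfit(N, log_rt, 1, full=True)
_, pow_resid, *_ = np.polyfit(np.log(N), log_rt, 1, full=True)
```

On data generated by an exponential law, the exponential fit's residuals are essentially zero while the power fit's are not, which is the logic (in miniature) behind comparing the two candidate laws on unaveraged series.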
Decision-makers effortlessly balance the need for urgency against the need for caution. Theoretical and neurophysiological accounts have explained this tradeoff solely in terms of the quantity of evidence required to trigger a decision (the "threshold"). This explanation has also been used as a benchmark test for evaluating new models of decision making, but the explanation itself has not been carefully tested against data. We rigorously test the assumption that emphasizing decision speed versus decision accuracy selectively influences only decision thresholds. In data from a new brightness discrimination experiment we found that emphasizing decision speed over decision accuracy not only decreases the amount of evidence required for a decision but also decreases the quality of information being accumulated during the decision process. This result was consistent for 2 leading decision-making models and in a model-free test. We also found the same model-based results in archival data from a lexical decision task (reported by Wagenmakers, Ratcliff, Gomez, & McKoon, 2008) and new data from a recognition memory task. We discuss implications for theoretical development and applications.