For better or worse, humans live a resource-constrained existence; only a fraction of the sensations our body experiences ever reach conscious awareness, and we store a shockingly small subset of these experiences in short-term memory for later use. Despite these observations, most theories of learning assume that, given feedback about a new experience, knowledge is updated so as to minimize subsequent errors with minimal consideration of cognitive capacity constraints. Acknowledging that human cognition has clear biological limitations, we explored the degree to which human learning could be better described with sets of biases toward simpler and more parsimonious mental representations (i.e., simplicity biases) relative to an error-driven, accuracy-maximizing normative model. Taking the normative model as a basis, we developed a suite of nested computational models that use various mechanistic simplicity biases to explain learning. We fit these models to four data sets that varied in the type of learning needed to achieve high accuracy. Across all data sets, we found consistent evidence that the best descriptors of human learning were models with mechanisms that instantiated a constrained optimization process, where errors were minimized subject to constraints on both attention and memory. Importantly, whereas normative models failed to account for patterns of attentional deployment over time, models with simplicity biases accounted well for both choice responses and fixation data as participants learned various categorization tasks.
Two fundamental difficulties when learning novel categories are deciding (a) what information is relevant and (b) when to use that information. Although previous theories have specified how observers learn to attend to relevant dimensions over time, those theories have largely remained silent about how attention should be allocated on a within-trial basis, which dimensions of information should be sampled, and how the temporal order of information sampling influences learning. Here, we use the adaptive attention representation model (AARM) to demonstrate that a common set of mechanisms can be used to specify (a) how the distribution of attention is updated between trials over the course of learning and (b) how attention dynamically shifts among dimensions within a trial. We validate our proposed set of mechanisms by comparing AARM’s predictions to observed behavior in four case studies, which collectively encompass different theoretical aspects of selective attention. We use both eye-tracking and choice response data to provide a stringent test of how attention and decision processes dynamically interact during category learning. Specifically, how does attention to selected stimulus dimensions give rise to decision dynamics, and in turn, how do decision dynamics influence which dimensions are attended to via gaze fixations?
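The abstract above describes attention-weighted category learning at a conceptual level. As a point of orientation only, the sketch below illustrates the general idea of attention-weighted similarity in exemplar-based categorization (in the style of classic exemplar models such as the GCM); it is not AARM itself, and all function names, parameter values, and the toy stimuli are hypothetical choices for illustration.

```python
import numpy as np

def attention_weighted_similarity(probe, exemplars, attention, sensitivity=1.0):
    """Similarity of a probe stimulus to stored exemplars, where each
    stimulus dimension is weighted by its current attention value
    (higher attention -> larger influence on the distance)."""
    # Attention-weighted city-block distance between probe and each exemplar.
    dists = np.sum(attention * np.abs(exemplars - probe), axis=1)
    return np.exp(-sensitivity * dists)

def category_choice_probs(probe, exemplars, labels, attention):
    """Luce-choice probability of each category, based on summed
    similarity of the probe to that category's stored exemplars."""
    sims = attention_weighted_similarity(probe, exemplars, attention)
    cats = np.unique(labels)
    evidence = np.array([sims[labels == c].sum() for c in cats])
    return evidence / evidence.sum()

# Toy example: two 2-D exemplars per category. Attention concentrated on
# dimension 0 makes that dimension dominate the categorization decision,
# even though the probe is far from category 0 on dimension 1.
exemplars = np.array([[0.1, 0.5], [0.2, 0.4],   # category 0
                      [0.9, 0.5], [0.8, 0.6]])  # category 1
labels = np.array([0, 0, 1, 1])
attention = np.array([0.9, 0.1])  # mostly attend dimension 0
probs = category_choice_probs(np.array([0.15, 0.9]), exemplars, labels, attention)
```

Shifting the attention vector toward dimension 1 would reverse which category dominates, which is the between-trial learning dynamic the abstract describes in qualitative terms.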
Trait impulsivity—defined by strong preference for immediate over delayed rewards and difficulties inhibiting prepotent behaviors—is observed in all externalizing disorders, including substance-use disorders. Many laboratory tasks have been developed to identify decision-making mechanisms and correlates of impulsive behavior, but convergence between task measures and self-reports of impulsivity is consistently low. Long-standing theories of personality and decision-making predict that neurally mediated individual differences in sensitivity to (a) reward cues and (b) punishment cues (frustrative nonreward) interact to affect behavior. Such interactions obscure one-to-one correspondences between single personality traits and task performance. We used hierarchical Bayesian analysis in three samples with differing levels of substance use (N = 967) to identify interactive dependencies between trait impulsivity and state anxiety on impulsive decision-making. Our findings reveal how anxiety modulates impulsive decision-making and demonstrate benefits of hierarchical Bayesian analysis over traditional approaches for testing theories of psychopathology spanning levels of analysis.
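A key advantage of the hierarchical Bayesian approach mentioned above is partial pooling: noisy per-participant estimates are shrunk toward the group mean in proportion to their unreliability. The sketch below shows that shrinkage logic in its simplest conjugate normal-normal form; it is a minimal illustration of the principle, not the authors' actual model, and all simulated quantities and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate per-participant scores: a few noisy trials each, drawn around
# participant-level means that themselves share a group-level distribution.
group_mu, group_sd, noise_sd = 0.0, 1.0, 2.0
n_subj, n_trials = 30, 5
subj_mu = rng.normal(group_mu, group_sd, n_subj)
data = rng.normal(subj_mu[:, None], noise_sd, (n_subj, n_trials))

# No-pooling estimate: each participant's raw mean (noisy with few trials).
raw_means = data.mean(axis=1)

# Partial pooling (conjugate normal-normal): shrink each raw mean toward
# the grand mean by an amount set by the relative variances -- more
# measurement noise (or fewer trials) means more shrinkage.
grand_mean = raw_means.mean()
shrink = (noise_sd**2 / n_trials) / (noise_sd**2 / n_trials + group_sd**2)
pooled_means = shrink * grand_mean + (1 - shrink) * raw_means
```

In a full hierarchical Bayesian analysis the group-level variances would themselves be estimated from the data rather than fixed, but the direction of the effect is the same: individual estimates borrow strength from the group.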
Context effects are phenomena of multiattribute, multialternative decision-making that contradict normative models of preference. Numerous computational models have been created to explain these effects, communicated through the estimation of model parameters. Historically, parameters have been estimated by fitting these models to choice response data alone. In other contexts, such as those conventionally studied in perceptual decision-making, the times associated with choice responses have proven effective in improving understanding and testing competing theoretical accounts of various experimental manipulations. Here, we explore the advantages of incorporating response time distributions into the inference procedure, using the most recent model of context effects, the multiattribute linear ballistic accumulator (MLBA) model, as a case study. First, we establish in a simulation study that incorporating response time data in the inference procedure does indeed produce more constrained estimates of the model parameters, and the extent of this constraint is modulated by the number of observations within the data. Second, we generalize our results beyond the MLBA model by using likelihood-free techniques to estimate model parameters. Finally, we investigate parameter differences when choice or choice response time data are used to fit the MLBA model by fitting different model variants to real data from a perceptual decision-making experiment with context effects. Based on likelihood-free and likelihood-based estimations of both simulated and real data, we conclude that response time measures offer an important source of constraint for models of context effects.
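The MLBA builds on the linear ballistic accumulator (LBA), a racing-accumulator model that naturally produces joint choice and response-time predictions, which is exactly the extra constraint the abstract argues for. The sketch below simulates a basic two-accumulator LBA; it is a generic illustration of the LBA mechanism, not the MLBA or the authors' fitting procedure, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lba(n_trials, drift_means, drift_sd=0.3, A=0.5, b=1.0, t0=0.2):
    """Simulate choices and response times from a linear ballistic
    accumulator: each accumulator starts at a uniform point in [0, A]
    and races linearly to threshold b at a normally distributed rate;
    the first to reach threshold determines the choice and its RT."""
    n_acc = len(drift_means)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for t in range(n_trials):
        starts = rng.uniform(0, A, n_acc)
        drifts = rng.normal(drift_means, drift_sd)
        drifts[drifts <= 0] = 1e-6       # truncate non-positive rates
        times = (b - starts) / drifts    # time for each accumulator to hit b
        winner = np.argmin(times)
        choices[t] = winner
        rts[t] = times[winner] + t0      # add non-decision time
    return choices, rts

# The accumulator with the higher mean drift wins more often AND tends
# to win faster -- the joint choice/RT pattern that constrains inference.
choices, rts = simulate_lba(2000, drift_means=np.array([1.2, 0.8]))
```

Fitting such a model to choices alone discards the RT distributions that the simulation generates for free, which is the information loss the abstract's simulation study quantifies.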
The link between mind, brain, and behavior has mystified philosophers and scientists for millennia. Recent progress has been made by forming statistical associations between manifest variables of the brain (e.g., electroencephalogram [EEG], functional MRI [fMRI]) and manifest variables of behavior (e.g., response times, accuracy) through hierarchical latent variable models. Within this framework, one can make inferences about the mind in a statistically principled way, such that complex patterns of brain–behavior associations drive the inference procedure. However, previous approaches were limited in the flexibility of the linking function, which has proved prohibitive for understanding the complex dynamics exhibited by the brain. In this article, we propose a data-driven, nonparametric approach that allows complex linking functions to emerge from fitting a hierarchical latent representation of the mind to multivariate, multimodal data. Furthermore, to enforce biological plausibility, we impose both spatial and temporal structure so that the types of realizable system dynamics are constrained. To illustrate the benefits of our approach, we investigate the model’s performance in a simulation study and apply it to experimental data. In the simulation study, we verify that the model can be accurately fitted to simulated data, and latent dynamics can be well recovered. In an experimental application, we simultaneously fit the model to fMRI and behavioral data from a continuous motion tracking task. We show that the model accurately recovers both neural and behavioral data and reveals interesting latent cognitive dynamics, the topology of which can be contrasted with several aspects of the experiment.
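The core structural idea in the abstract above is that neural and behavioral measurements are linked only through a shared latent cognitive state. The sketch below illustrates that generative structure in its most stripped-down form; it is a toy illustration of a joint latent-variable linking model, not the authors' nonparametric approach, and the loadings, noise levels, and random-walk dynamics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# One latent cognitive state drives both a neural channel and a behavioral
# channel, so the two measurements covary only through the shared latent
# trajectory (the "linking function" in its simplest linear form).
T = 500
latent = np.cumsum(rng.normal(0, 0.1, T))        # smooth latent dynamics (random walk)
neural = 1.5 * latent + rng.normal(0, 0.5, T)    # e.g., a BOLD-like signal
behavior = -0.8 * latent + rng.normal(0, 0.5, T) # e.g., tracking error

# Because both channels load on the same latent state, their correlation
# is induced entirely by the latent trajectory (negative here, since the
# two loadings have opposite signs).
r = np.corrcoef(neural, behavior)[0, 1]
```

The contribution described in the abstract is to replace the fixed linear loadings here with a flexible, data-driven linking function while imposing spatial and temporal structure to keep the latent dynamics biologically plausible.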