Our experience of the world seems to divide naturally into discrete, temporally extended events, yet the mechanisms underlying the learning and identification of events are poorly understood. Research on event perception has focused on transient elevations in predictive uncertainty or surprise as the primary signal driving event segmentation. We present human behavioral and functional magnetic resonance imaging (fMRI) evidence in favor of a different account, in which event representations coalesce around clusters or ‘communities’ of mutually predicting stimuli. Through parsing behavior, fMRI adaptation and multivoxel pattern analysis, we demonstrate the emergence of event representations in a domain containing such community structure, but in which transition probabilities (the basis of uncertainty and surprise) are uniform. We present a computational account of how the relevant representations might arise, proposing a direct connection between event learning and the learning of semantic categories.
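The key feature of this design is a stimulus graph that contains clusters of mutually predicting items while keeping every transition equally probable, so that surprise-based accounts predict no segmentation signal at community boundaries. The sketch below is a hypothetical illustration of such a structure (the specific 15-node, three-community layout and the `build_graph` helper are assumptions for demonstration, not the authors' actual stimuli): every node has exactly four neighbors, so every transition in a random walk has probability 1/4, yet community membership still changes at boundary crossings.

```python
import random

def build_graph():
    """Build a 15-node graph: three 5-node communities, uniform degree 4."""
    communities = [list(range(i, i + 5)) for i in (0, 5, 10)]
    edges = set()
    # Fully connect each community (each node gets degree 4 internally).
    for comm in communities:
        for a in comm:
            for b in comm:
                if a < b:
                    edges.add((a, b))
    # Cut the internal edge between each community's two boundary nodes,
    # then link boundary nodes of adjacent communities, preserving degree 4.
    for cut, add in [((0, 4), (4, 5)), ((5, 9), (9, 10)), ((10, 14), (0, 14))]:
        edges.discard(cut)
        edges.add(add)
    adj = {n: [] for n in range(15)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj

adj = build_graph()
# Uniform transition probabilities: every node has exactly 4 neighbors,
# so each step in a random walk occurs with probability 1/4.
assert all(len(neighbors) == 4 for neighbors in adj.values())

# A random walk on this graph produces a stimulus stream in which
# predictive uncertainty never changes, even at community boundaries.
random.seed(0)
node = 0
walk = [node]
for _ in range(50):
    node = random.choice(adj[node])
    walk.append(node)
```

Because out-degree is constant everywhere, any event boundary a learner reports on such a walk cannot be attributed to a transient rise in surprise; it must instead reflect learned community structure.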
Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies.
Thousands of functional magnetic resonance imaging (fMRI) studies have provided important insight into the human brain. However, only a handful of these studies tested infants while they were awake, because of the significant and unique methodological challenges involved. We report our efforts to address these challenges, with the goal of creating methods for awake infant fMRI that can reveal the inner workings of the developing, preverbal mind. We use these methods to collect and analyze two fMRI datasets obtained from infants during cognitive tasks, released publicly with this paper. In these datasets, we explore and evaluate data quantity and quality, task-evoked activity, and preprocessing decisions. We disseminate these methods by sharing two software packages that integrate infant-friendly cognitive tasks and eye-gaze monitoring with fMRI acquisition and analysis. These resources make fMRI a feasible and accessible technique for cognitive neuroscience in awake and behaving human infants.
Highlights
- Hippocampus supports statistical learning of temporal regularities in infancy
- Changes in hippocampal activity emerge after only minutes of exposure
- Localization of learning effects within hippocampal system similar to adults
- Exploratory analyses suggest a selective role for medial prefrontal cortex
Attention prioritizes information that is most relevant to current behavioral goals. This prioritization can be accomplished by amplifying neural responses to goal-relevant information and by strengthening coupling between regions involved in processing this information. Such modulation occurs within and between areas of visual cortex, and relates to behavioral effects of attention on perception. However, attention also has powerful effects on learning and memory behavior, suggesting that similar modulation may occur for memory systems. We used fMRI to investigate this possibility, examining how visual information is prioritized for processing in the medial temporal lobe (MTL). We hypothesized that the way in which ventral visual cortex couples with MTL input structures will depend on the kind of information being attended. Indeed, visual cortex was more coupled with parahippocampal cortex when scenes were attended and more coupled with perirhinal cortex when faces were attended. This switching of MTL connectivity was more pronounced for visual voxels with weak selectivity, suggesting that connectivity might help disambiguate sensory signals. These findings provide an initial window into an attentional mechanism that could have consequences for learning and memory.