The world is richly structured on multiple spatiotemporal scales. To represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations, including pooling, normalization and pattern completion, enable these systems to recognize and predict spatial structure while remaining robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion.
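As a rough illustration (not drawn from the paper itself), temporal analogues of two of these operations can be sketched in a few lines of NumPy: temporal pooling as a max over consecutive time windows, and temporal (divisive) normalization as scaling each sample by mean activity in a trailing window. The window size and `eps` are arbitrary choices for this sketch.

```python
import numpy as np

def temporal_pool(x, window=4):
    """Max-pool a 1-D signal over non-overlapping temporal windows."""
    n = len(x) // window * window          # drop any ragged tail
    return x[:n].reshape(-1, window).max(axis=1)

def temporal_normalize(x, window=4, eps=1e-6):
    """Divisively normalize each sample by mean activity in a trailing window."""
    out = np.empty(len(x), dtype=float)
    for t in range(len(x)):
        recent = x[max(0, t - window + 1): t + 1]
        out[t] = x[t] / (recent.mean() + eps)
    return out

signal = np.array([1.0, 2.0, 4.0, 3.0, 8.0, 8.0, 8.0, 8.0])
pooled = temporal_pool(signal, window=4)       # coarser time scale: [4., 8.]
normed = temporal_normalize(signal, window=4)  # sustained input settles near 1
```

Note how the normalized response to the sustained high-amplitude segment decays toward 1.0, a toy version of the contrast robustness the abstract describes.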
After we listen to a series of words, we can silently replay them in our mind. Does this mental replay involve a reactivation of our original perceptual dynamics? We recorded electrocorticographic (ECoG) activity across the lateral cerebral cortex as people heard and then mentally rehearsed spoken sentences. For each region, we tested whether silent rehearsal of sentences involved reactivation of sentence-specific representations established during perception or transformation to a distinct representation. In sensorimotor and premotor cortex, we observed reliable and temporally precise responses to speech; these patterns transformed to distinct sentence-specific representations during mental rehearsal. In contrast, we observed less reliable and less temporally precise responses in prefrontal and temporoparietal cortex; these higher-order representations, which were sensitive to sentence semantics, were shared across perception and rehearsal of the same sentence. The mental rehearsal of natural speech involves the transformation of stimulus-locked speech representations in sensorimotor and premotor cortex, combined with diffuse reactivation of higher-order semantic representations.
Statistical learning refers to the process of extracting regularities from the world without feedback. What are the necessary conditions for statistical learning to arise? It has been argued that visual statistical learning (VSL) is “automatic”, such that subjects will passively and even unconsciously extract statistical regularities from streams of visual input as long as they attend to the stimuli. In contrast, our data indicate that simply attending to stimuli is not, on its own, sufficient for learning. In Experiments 1 & 2, we provided incidental exposure to regularities in a stream of images and observed little to no VSL across a range of conditions. In Experiment 3, we found that explicitly instructing participants to seek regularities dramatically improved their performance on direct measures of learning, but not on an indirect response time measure. Finally, in Experiments 4 & 5, we demonstrated that a methodological confound in prior work using the indirect response time measure could account for some previous evidence of automatic and implicit VSL. Overall, we found very little evidence of learning using direct measures of VSL, and no evidence of learning using an indirect response time measure. Participants who recognized visual sequence regularities in a forced-choice task could also often recreate the sequences when explicitly probed, indicating their knowledge was not entirely implicit. We suggest that some form of active engagement with stimuli may be needed to extract sequential regularities, and that VSL does not occur automatically.
Keywords: ECoG, sentence repetition, verbal short-term memory, subvocal rehearsal

...premotor cortex (dPMC) of the left hemisphere. Furthermore, increased activation in these areas during silent rehearsal predicted more accurate behavioral recall of the sentence content. Consistent with prior literature (e.g., Cheung et al., 2016; Glanz et al., 2018), the SMC and dPMC responded rapidly during sentence perception, encoding sub-second properties of the input.
The fidelity of sensory responses in SMC and dPMC was exceeded only by the superior temporal gyrus (STG) and middle temporal gyrus (MTG). When sentences were silently rehearsed, SMC and dPMC again exhibited sentence-specific activity patterns, but the activity patterns were distinct from those observed during perception of the same sentences. Altogether, the data support a model in which "motor" circuitry (SMC and PMC) supports verbal short-term memory via a sensorimotor transformation (Cogan et al., 2014). We also observed sentence-specific activity in anterior prefrontal cortex (aPFC) and temporoparietal cortex (TPJ). Sentence-specific activity in these areas was less temporally precise and less reliable than in sensory or motor areas. However, patterns in prefrontal areas were sensitive to the contextual meaning of the sentence being rehearsed. Moreover, the representations in these high-level areas were not transformed, but were instead shared across the perception and rehearsal of specific sentences. Activation in these higher order areas is therefore consistent with a...
Humans can extract regularities from their environment, enabling them to recognize and predict sequences of events. The process of regularity extraction is called ‘statistical learning’ and is generally thought to occur rapidly and automatically; that is, regularities are extracted from repeated stimulus presentations, without intent or awareness, as long as the stimuli are attended. We hypothesized that visual statistical learning is not entirely automatic, even when stimuli are attended, and that the learning depends on the extent to which viewers process the relationships between stimuli. To test this, we measured statistical learning performance across seven conditions in which participants (N=774) viewed image sequences. As task instructions across conditions increasingly required participants to attend to relationships between stimuli, their learning performance increased from chance to robust levels. We conclude that the learning observed in visual statistical learning paradigms is, for the most part, not automatic and requires more than passively attending to stimuli.