In contrast to classical views of working memory (WM) maintenance, recent research investigating activity-silent neural states has demonstrated that persistent neural activity in sensory cortices is not necessary for active maintenance of information in WM. Previous studies in humans have measured putative memory representations indirectly, by decoding memory contents from neural activity evoked by a neutral impulse stimulus. However, it is unclear whether memory contents can also be decoded in different species and attentional conditions. Here, we employ a cross-species approach to test whether auditory memory contents can be decoded from electrophysiological signals recorded in humans and rats. Awake human volunteers (N = 21) were exposed to pure tone and noise burst stimuli during an auditory sensory memory task while neural activity was recorded using electroencephalography (EEG). In a closely matching paradigm, anesthetized female rats (N = 5) were exposed to comparable stimuli while neural activity was recorded from the auditory cortex using electrocorticography (ECoG). In both species, acoustic frequency could be decoded from neural activity evoked by pure tones as well as by neutral frozen noise bursts. This finding demonstrates that memory contents can be decoded in different species and brain states using homologous methods, suggesting that the mechanisms of sensory memory encoding are evolutionarily conserved across species.
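For illustration only, the impulse-based decoding described above can be sketched as a standard cross-validated classification of evoked responses. The snippet below is not the authors' pipeline; the array shapes, number of tone frequencies, injected signal, and classifier choice are placeholder assumptions, with synthetic data standing in for the recorded EEG/ECoG.

```python
# Minimal sketch: cross-validated decoding of stimulus frequency from
# impulse-evoked responses. Data are synthetic placeholders; in the actual
# studies the features would be EEG/ECoG amplitudes per channel and time point.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))  # evoked responses
y = rng.integers(0, 4, size=n_trials)                      # 4 assumed tone frequencies

# Inject a weak frequency-specific signal so decoding exceeds chance (assumption).
for k in range(4):
    X[y == k, k, 20:30] += 0.5

# Flatten channels x time into one feature vector per trial.
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```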
Recent studies have shown that stimulus history can be decoded by using broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system elicit tonotopically organized neural activity similar to that evoked by the perceived stimuli. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats exposed to repeated stimulus sequences, in which events were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to the broadband impulses at overlapping latencies, but that they are linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction within a single paradigm, and provide novel evidence that predictive representations are learned even under anaesthesia.
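A rough sketch of the time-resolved decoding described above, with separate classifiers for stimulus history and for the predicted stimulus at each latency of the impulse response, could look like the following. This is not the authors' analysis; the data, labels, channel and time dimensions, and classifier are illustrative assumptions.

```python
# Sketch of time-resolved decoding: at each latency of the impulse-evoked
# response, decode (a) the preceding stimulus (history) and (b) the stimulus
# expected at that sequence position (prediction). Synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 150, 64, 40
X = rng.standard_normal((n_trials, n_channels, n_times))
y_history = rng.integers(0, 4, n_trials)     # identity of the preceding tone (assumed 4 classes)
y_predicted = rng.integers(0, 4, n_trials)   # identity of the expected tone (assumed 4 classes)

def decoding_timecourse(X, y):
    """Cross-validated decoding accuracy at each time point."""
    return np.array([
        cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
        for t in range(X.shape[2])
    ])

acc_history = decoding_timecourse(X, y_history)
acc_predicted = decoding_timecourse(X, y_predicted)
print(f"Peak history decoding:    {acc_history.max():.2f}")
print(f"Peak prediction decoding: {acc_predicted.max():.2f}")
```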
Perception is sensitive to statistical regularities in the environment, including the temporal characteristics of sensory inputs. Interestingly, temporal patterns implicitly learned in one modality can also be recognised in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers (N = 24, 12 females, 12 males) using electroencephalography (EEG) while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials contained a repetition of the temporal pattern within the sequence, and participants were tasked with detecting these trials. Unbeknownst to the participants, some sequences reappeared throughout the experiment, inducing implicit learning. Replicating previous behavioural findings, we showed that participants benefit from temporal information learned in one modality and can apply this information to stimuli presented in another modality. Using an analysis of EEG response learning curves, we showed that learning temporal structures within modalities modulates single-trial EEG response amplitudes, and that these effects could be localised to modality-specific cortical regions. Learning transfer across modalities was likewise associated with modulations of single-trial EEG response amplitudes, as well as with beta-band power in the right inferior frontal gyrus. The neural effects of learning transfer were similar whether temporal information learned in audition was transferred to visual stimuli or vice versa. Thus, both modality-specific mechanisms for learning temporal information and general mechanisms that mediate learning transfer across modalities have distinct physiological signatures observable in the EEG.

Significance Statement
Temporal patterns governing sensory stimuli can be extracted and used to optimise perceptual processing. However, it is unclear what brain mechanisms mediate the learning of temporal information within a sensory modality, and how the effects of learning can be applied to another modality. Here, we presented auditory and visual stimuli to human participants while recording their brain activity using electroencephalography (EEG). We observed behavioural benefits and neural signatures of implicit temporal pattern learning within a sensory modality, as well as transfer of learned patterns from one modality to another (audition to vision and vice versa). Interestingly, the neural correlates of temporal learning within modalities relied on modality-specific brain regions, while learning transfer affected activity in frontal regions, suggesting distinct mechanisms.
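The single-trial learning-curve analysis mentioned above can be illustrated, in highly simplified form, by regressing single-trial response amplitudes on (log) presentation number, separately for recurring and novel patterns. This is a sketch under assumed values, not the authors' method; the amplitudes and effect sizes below are synthetic.

```python
# Sketch of a single-trial learning-curve analysis: regress single-trial
# response amplitude on the log number of presentations of a given pattern,
# separately for recurring and novel patterns. Synthetic placeholder values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_presentations = 40
presentation_idx = np.arange(1, n_presentations + 1)

# Simulated single-trial amplitudes: recurring patterns show a gradual
# amplitude change with repeated exposure, novel patterns do not (assumption).
amp_recurring = 5.0 - 0.03 * presentation_idx + rng.normal(0, 0.5, n_presentations)
amp_novel = 5.0 + rng.normal(0, 0.5, n_presentations)

for label, amp in [("recurring", amp_recurring), ("novel", amp_novel)]:
    slope, intercept, r, p, se = stats.linregress(np.log(presentation_idx), amp)
    print(f"{label:>9}: slope = {slope:+.3f} per log-presentation, p = {p:.3f}")
```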