Real-world environments are typically dynamic, complex, and multisensory, and they require the support of top-down attention and memory mechanisms for us to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research using well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on these theoretical and methodological advances, research has increasingly departed from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation, or dynamically changing task demands. Here, we present the state of the field within the emerging, heterogeneous domain of real-world neuroscience. Specifically, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscience" approaches, which differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. We showcase the commonalities and distinctive features of these approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium of the same title answer questions about the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.
The prefrontal cortex has been extensively implicated in autism to explain deficits in executive and other higher-order functions related to cognition, language, sociability, and emotion. The possible changes at the level of the neuronal microcircuit are, however, not known. We studied microcircuit alterations in the prefrontal cortex in the valproic acid rat model of autism and found that layer 5 pyramidal neurons are connected to significantly more neighbouring neurons than in controls. These excitatory connections are more plastic, displaying enhanced long-term potentiation of synaptic strength. The microcircuit alterations found in the prefrontal cortex are therefore similar to those previously found in the somatosensory cortex. Hyper-connectivity and hyper-plasticity in the prefrontal cortex imply hyper-functionality of one of the highest-order processing regions in the brain, in contrast to the hypo-functionality normally proposed for this region to explain some of the autistic symptoms. We propose that a number of deficits in autism, such as impaired sociability, attention, multi-tasking, and repetitive behaviours, should be re-interpreted in the light of a hyper-functional prefrontal cortex.
Communication signals are important for social interactions and survival and are thought to receive specialized processing in the visual and auditory systems. Whereas the neural processing of faces by face clusters and face cells has been repeatedly studied [1-5], less is known about the neural representation of voice content. Recent functional magnetic resonance imaging (fMRI) studies have localized voice-preferring regions in the primate temporal lobe [6, 7], but the hemodynamic response cannot directly assess neurophysiological properties. We investigated the responses of neurons in an fMRI-identified voice cluster in awake monkeys, and here we provide the first systematic evidence for voice cells. "Voice cells" were identified, in analogy to "face cells," as neurons responding at least twofold more strongly to conspecific voices than to "nonvoice" sounds or heterospecific voices. Importantly, whereas face clusters are thought to contain high proportions of face cells [4] responding broadly to many faces [1, 2, 4, 5, 8-10], we found that voice clusters contain only moderate proportions of voice cells. Furthermore, individual voice cells exhibit high stimulus selectivity. The results reveal the neurophysiological bases for fMRI-defined voice clusters in the primate brain and highlight potential differences in how the auditory and visual systems generate selective representations of communication signals.
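The twofold-response criterion for voice cells can be sketched as a simple selection rule. The firing rates below are hypothetical and the function name is ours, not from the study:

```python
import numpy as np

def is_voice_cell(conspecific, nonvoice, heterospecific, ratio=2.0):
    """Classify a neuron as a 'voice cell' if its mean response to
    conspecific voices is at least `ratio` times its mean response
    to each of the control categories (nonvoice sounds and
    heterospecific voices)."""
    r_con = np.mean(conspecific)
    r_non = np.mean(nonvoice)
    r_het = np.mean(heterospecific)
    return r_con >= ratio * max(r_non, r_het)

# Hypothetical mean firing rates (spikes/s) per stimulus in each category
is_voice_cell([40, 35, 50], [12, 8, 10], [15, 11, 9])   # → True
is_voice_cell([20, 20], [12, 12], [15, 15])             # → False
```

In practice such a ratio criterion would be applied to baseline-corrected responses and combined with a significance test, but the thresholding logic is the same.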
Social animals can identify conspecifics from many forms of sensory input. However, whether the neuronal computations that support our ability to identify individuals rely on modality-independent convergence or involve ongoing synergistic interactions along the multiple sensory streams remains controversial. Direct neuronal measurements at relevant brain sites could address such questions, but this requires better bridging of the work in humans and animal models. We review recent studies in nonhuman primates on voice- and face-identity-sensitive pathways and evaluate the correspondences to relevant findings in humans. This synthesis provides insights into converging sensory streams in the primate anterior temporal lobe for identity processing. Furthermore, we advance a model and suggest how alternative neuronal mechanisms could be tested.
When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli, we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.

How the brain parses multisensory input despite the variable and often large differences in the onset of sensory signals across different modalities remains unclear.
We can maintain a coherent multisensory percept across a considerable range of spatial and temporal discrepancies (1-4): for example, auditory and visual speech signals can be perceived as belonging to the same multisensory "object" over temporal windows of hundreds of milliseconds (5-7). However, such misalignment can drastically affect neuronal responses in ways that may also differ between brain regions (8-10). We asked how natural asynchronies in the onset of face/voice content in communication signals would affect voice-sensitive cortex, a region in the ventral "object" pathway (11) where neurons (i) are selective for auditory features in communication sounds (12-14), (ii) are influenced by visual "face" content (12), and (iii) display relatively slow and temporally variable responses in comparison with neurons in primary auditory cortical or subcortical structures (14-16).

Neurophysiological studies in human and nonhuman animals have provided considerable insights into the role of cortical oscillations during multisensory conditions and for parsing speech. Cortical oscillations entrain to the slow temporal dynamics of natural sounds (17-20) and are thought to reflect the excitability of local networks to sensory inputs...
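Phase resetting of the kind reported above is commonly quantified with inter-trial phase coherence: if a stimulus onset resets an ongoing low-frequency oscillation, its phase becomes consistent across trials. A minimal numpy sketch, assuming stimulus-aligned LFP segments; the synthetic 5 Hz data and the function name are ours, not from the study:

```python
import numpy as np

def itc_at_freq(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency.

    trials : (n_trials, n_samples) array of LFP segments aligned to
             stimulus (e.g., face) onset.
    fs     : sampling rate in Hz.
    Returns a value in [0, 1]; values near 1 indicate the oscillation
    phase at `freq` is consistent across trials, as expected after a
    stimulus-driven phase reset.
    """
    n = trials.shape[1]
    coeffs = np.fft.rfft(trials, axis=1)
    idx = int(round(freq * n / fs))        # DFT bin closest to freq
    phases = np.angle(coeffs[:, idx])
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic check: 5 Hz trials with random vs. aligned starting phase
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(1000) / fs
random_phase = np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi, (200, 1)))
aligned = np.sin(2 * np.pi * 5 * t + np.zeros((200, 1)))

itc_at_freq(aligned, fs, 5)        # ≈ 1 (phase reset across trials)
itc_at_freq(random_phase, fs, 5)   # ≈ 0 (no phase alignment)
```

Real analyses would typically compute this over a time-frequency decomposition and compare pre- versus post-onset windows; this single-frequency version only conveys the core statistic.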
Rapid progress in technologies such as calcium imaging and electrophysiology has driven a dramatic increase in the size and extent of neural recordings. Even so, interpreting these data requires considerable knowledge about the nature of the representation and often depends on manual operations. Decoding provides a means to infer the information content of such recordings but typically requires highly processed data and prior knowledge of the encoding scheme. Here, we developed a deep-learning framework able to decode sensory and behavioral variables directly from wide-band neural data. The network requires little user input and generalizes across stimuli, behaviors, brain regions, and recording techniques. Once trained, it can be analyzed to determine which elements of the neural code are informative about a given variable. We validated this approach using electrophysiological and calcium-imaging data from rodent auditory cortex and hippocampus, as well as human electrocorticography (ECoG) data. We show successful decoding of finger movement, auditory stimuli, and spatial behaviors, including a novel representation of head direction, from raw neural activity.
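The framework above is a deep network trained end-to-end on wide-band data. As a minimal stand-in that only illustrates the idea of decoding a variable directly from raw traces, the sketch below trains a logistic-regression decoder on synthetic segments; all data, names, and parameters are hypothetical, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_trials(n_trials, evoked, n_samples=256):
    """Hypothetical raw traces: `evoked=True` adds a small stimulus-locked
    deflection on samples 100-139 (the 'stimulus-present' class)."""
    x = rng.normal(0.0, 1.0, (n_trials, n_samples))
    if evoked:
        x[:, 100:140] += 0.8
    return x

X = np.vstack([make_trials(200, False), make_trials(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic-regression decoder trained by gradient descent on log-loss,
# operating directly on the raw samples (no feature extraction).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)   # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))
    err = p - y
    w -= 0.1 * (X.T @ err) / len(y)
    b -= 0.1 * err.mean()

acc = np.mean((p > 0.5) == y)         # training accuracy
```

A linear decoder can only pick up on amplitude differences at fixed latencies; the appeal of the deep network described in the abstract is that it learns its own features and so also handles nonlinear or temporally variable codes.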
While animals navigating the real world face a barrage of complex sensory input, their brains have evolved to perceptually compress multidimensional information by selectively extracting the features relevant for survival. For instance, the communication signals supporting social interactions in several mammalian species consist of acoustically complex sequences of vocalizations; however, little is known about what information listeners extract from such time-varying sensory streams. Here, we utilize female mice's natural behavioural response to male courtship songs to evaluate the acoustic dimensions used in their social decisions. We found that females were highly sensitive to disruptions of song temporal regularity and preferentially approached playbacks of intact male songs over rhythmically irregular versions of the songs. In contrast, female behaviour was invariant to manipulations affecting the songs' sequential organization or the spectrotemporal structure of individual syllables. The results reveal temporal regularity as a key acoustic cue extracted by mammalian listeners from complex vocal sequences during goal-directed social behaviour.