Natural environments convey information through multiple sensory modalities, all of which contribute to people's percepts. Although it has been shown that visual or auditory content of scene categories can be decoded from brain activity, it remains unclear how humans represent scene information beyond a specific sensory modality. To address this question, we investigated how categories of scene images and sounds are represented in several brain regions. A group of healthy human subjects (both sexes) participated in the present study, in which their brain activity was measured with fMRI while they viewed images or listened to sounds of different real-world environments. We found that both visual and auditory scene categories can be decoded not only from modality-specific areas, but also from several brain regions in the temporal, parietal, and prefrontal cortex (PFC). Intriguingly, only in the PFC, but not in any other region, do categories of scene images and sounds appear to be represented in similar activation patterns, suggesting that scene representations in PFC are modality-independent. Furthermore, the error patterns of neural decoders indicate that category-specific neural activity patterns in the middle and superior frontal gyri are tightly linked to categorization behavior. Our findings demonstrate that complex scene information is represented at an abstract level in the PFC, regardless of the sensory modality of the stimulus.

Our experience in daily life includes multiple sensory inputs, such as images, sounds, or scents from the surroundings, all of which contribute to our understanding of the environment. Here, for the first time, we investigated where and how in the brain information about the natural environment from multiple senses is merged to form modality-independent representations of scene categories. We show direct decoding of scene categories across sensory modalities from patterns of neural activity in the prefrontal cortex (PFC).
We also conclusively tie these neural representations to human categorization behavior by comparing patterns of errors between a neural decoder and behavior. Our findings suggest that PFC is a central hub for integrating sensory information and computing modality-independent representations of scene categories.
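The logic of the cross-modal decoding analysis described above can be illustrated with a minimal sketch: train a classifier on activity patterns evoked by one modality and test it on patterns evoked by the other. Above-chance transfer is the signature of a modality-independent representation. The sketch below uses synthetic voxel patterns and a simple nearest-centroid decoder; all numbers (category count, voxel count, noise level) are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, n_voxels, n_trials = 4, 50, 20  # illustrative sizes, not the study's

# Hypothesized modality-independent category patterns (the shared component)
category_signal = rng.normal(0.0, 1.0, (n_categories, n_voxels))

def simulate_trials(noise_sd):
    """Simulate trial-wise voxel patterns: shared category signal + noise."""
    X = np.vstack([category_signal[c] + rng.normal(0.0, noise_sd, (n_trials, n_voxels))
                   for c in range(n_categories)])
    y = np.repeat(np.arange(n_categories), n_trials)
    return X, y

# Independent "runs" for each modality (same category signal, independent noise)
X_visual, y_visual = simulate_trials(noise_sd=1.5)
X_auditory, y_auditory = simulate_trials(noise_sd=1.5)

# Train a nearest-centroid decoder on visual trials only ...
centroids = np.stack([X_visual[y_visual == c].mean(axis=0)
                      for c in range(n_categories)])

# ... and test it on auditory trials: above-chance accuracy indicates a
# representation that generalizes across sensory modalities
dists = ((X_auditory[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = (dists.argmin(axis=1) == y_auditory).mean()
print(f"cross-modal accuracy: {accuracy:.2f} (chance = {1 / n_categories:.2f})")
```

In a region with purely modality-specific codes, the shared component would be absent and the same analysis would hover at chance; that contrast is what distinguishes PFC from sensory areas in the account above.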
Statistical learning allows us to discover myriad structures in our environment, which is saturated with information at many different levels, from items to categories. How do children learn different levels of information (about regularities that pertain to items and the categories they come from), and how does this differ from adults? Studies on category learning and memory have suggested that children may be more focused on items than adults. If this is also the case for statistical learning, children may not extract and learn the multi-level regularities that adults can. We report three experiments showing that children and adults extract both item- and category-level regularities in statistical learning. In Experiments 1 and 2, we show that both children and adults can learn structure at the item and category levels when they are measured independently. In Experiment 3, we show that both children and adults learn about categories even when exposure does not require this: both are able to generalize their learning from the item to the category level. Results indicate that statistical learning operates across multi-level structure in children and adults alike, enabling generalization of learning from specific items to exemplars from categories of those items that observers have never seen. Even though children may be more focused on items during other forms of learning, they learn about categories from item-level input during statistical learning.
It has been shown that attention can modulate the processing of a stimulus even when it is invisible (Bahrami, Carmel, Walsh, Rees, & Lavie, 2008, Perception, 37, 1520-1528). Previous studies, however, investigated the effect of spatial attention on the processing of only invisible items. Thus, it remains unclear how the effect of spatial attention is distributed over visible and invisible items when these items are simultaneously attended at the same location. In the present study, we addressed this question using two types of adapters, one visible and one invisible, and compared how attention affected the processing of each adapter. Moving gratings and tilted gratings were presented to each eye; the moving ones were dominant over the tilted ones. Both types of stimuli were located on the left and right sides of a fixation cross, and the participants performed a task that modulated their attention to one side or the other. In Experiment 1, they were asked to detect a contrast decrement in one of the moving gratings, and in Experiment 2, they detected a dot that was presented to both eyes. We found that attention increased the amount of motion aftereffect induced by the visible adapters. However, we did not find effects of attention on tilt aftereffects from the invisible adapters. Finally, in Experiment 3, we found that attention successfully increased the amount of tilt aftereffect when the adapters were not suppressed. These findings suggest that spatial attention is more likely to influence visible items than invisible items at the same location. We also found that invisible items do not interfere with the attentional modulation of the processing of visible items.
One critical feature of children's cognition is their relatively immature attention. Decades of research have shown that children's attentional abilities mature slowly over the course of development, including the ability to filter out distracting information. Despite this rich behavioral literature, little is known about how developing attentional abilities modulate neural representations in children. This information is critical to understanding exactly how attentional development shapes the way children process information. One intriguing possibility is that attention might be less likely to impact neural representations in children than in adults. In particular, representations of attended items may be less likely to be sharpened relative to unattended items in children than in adults. To investigate this possibility, we measured brain activity using fMRI while adults (21-31 years) and children (7-9 years) performed a one-back working memory task in which they were directed to attend to either the motion direction or an object in a complex display where both were present. We used multivoxel pattern analysis to compare the decoding accuracy of attended and unattended information. Consistent with attentional sharpening, we found higher decoding accuracy for task-relevant information (i.e., objects in the object-attended condition) than for task-irrelevant information (i.e., motion in the object-attended condition) in adults' visual cortices. In children's visual cortices, however, both task-relevant and task-irrelevant information were decoded equally well. What's more, an exploratory whole-brain analysis showed that children represent task-irrelevant information more than adults do in multiple regions across the brain, including the prefrontal cortex. These findings show that (1) attention does not sharpen neural representations in the child visual cortex, and (2) the developing brain can represent more information than the adult brain.
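The attentional-sharpening comparison above can be pictured with a small simulation: each trial's voxel pattern mixes an object pattern and a motion pattern, each scaled by an attention-dependent gain, and the decoder's accuracy for each feature tracks its gain. The "adult-like" and "child-like" gain values below are made-up assumptions chosen to mimic the qualitative result (sharpened vs unsharpened profiles), not estimates from the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_voxels, n_trials = 4, 60, 40  # trials per simulated run; illustrative

# Fixed voxel patterns for object categories and for motion directions
obj_patterns = rng.normal(0.0, 1.0, (n_classes, n_voxels))
mot_patterns = rng.normal(0.0, 1.0, (n_classes, n_voxels))

def simulate_run(gain_obj, gain_mot, noise_sd=2.0):
    """Each trial mixes one object and one motion pattern, scaled by attention gains."""
    obj = rng.permutation(np.repeat(np.arange(n_classes), n_trials // n_classes))
    mot = rng.permutation(np.repeat(np.arange(n_classes), n_trials // n_classes))
    X = (gain_obj * obj_patterns[obj] + gain_mot * mot_patterns[mot]
         + rng.normal(0.0, noise_sd, (n_trials, n_voxels)))
    return X, obj, mot

def decode(X_train, y_train, X_test, y_test):
    """Nearest-centroid decoding accuracy, trained and tested on separate runs."""
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in range(n_classes)])
    pred = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return (pred == y_test).mean()

def attended_vs_unattended(gain_obj, gain_mot):
    X_tr, obj_tr, mot_tr = simulate_run(gain_obj, gain_mot)
    X_te, obj_te, mot_te = simulate_run(gain_obj, gain_mot)
    return decode(X_tr, obj_tr, X_te, obj_te), decode(X_tr, mot_tr, X_te, mot_te)

# "Adult-like" sharpening: attended objects amplified, unattended motion suppressed
adult_obj, adult_mot = attended_vs_unattended(gain_obj=1.0, gain_mot=0.3)
# "Child-like" profile: both features represented with similar strength
child_obj, child_mot = attended_vs_unattended(gain_obj=0.7, gain_mot=0.7)

print(f"adult-like: attended {adult_obj:.2f} vs unattended {adult_mot:.2f}")
print(f"child-like: attended {child_obj:.2f} vs unattended {child_mot:.2f}")
```

Under these assumed gains, the adult-like profile yields a large attended-minus-unattended decoding gap, while the child-like profile yields similar accuracies for both features, mirroring the pattern of results reported above.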