The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM) recruit the visual and auditory attention networks, respectively, in the frontal lobe, independent of sensory modality. These findings demonstrate not only that both sensory modality and information domain influence frontal lobe functional organization, but also that spatial processing co-localizes with visual processing and temporal processing co-localizes with auditory processing in lateral frontal cortex.
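To make the connectivity logic concrete, here is a minimal sketch of a seed-based intrinsic functional connectivity analysis in Python. It assumes ROI time courses have already been extracted from preprocessed resting-state fMRI; the frontal region names are reused from the abstract, but the seed names and all data are hypothetical placeholders, not the study's pipeline.

```python
# Minimal seed-based functional connectivity sketch (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 300
# Hypothetical ROI time courses; real values would come from fMRI preprocessing.
rois = {name: rng.standard_normal(n_timepoints)
        for name in ["sPCS", "iPCS", "tgPCS", "cIFS",
                     "visual_seed", "auditory_seed"]}

def connectivity(a, b):
    """Pearson correlation between two ROI time courses."""
    return np.corrcoef(rois[a], rois[b])[0, 1]

# Assign each frontal region to the network whose seed it correlates with more.
for roi in ["sPCS", "iPCS", "tgPCS", "cIFS"]:
    r_vis = connectivity("visual_seed", roi)
    r_aud = connectivity("auditory_seed", roi)
    label = "visual" if r_vis > r_aud else "auditory"
    print(f"{roi}: r_vis={r_vis:+.2f}, r_aud={r_aud:+.2f} -> {label} network")
```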
Auditory spatial attention serves important functions in auditory source separation and selection. Although the mechanisms of auditory spatial attention have been investigated broadly, the neural substrates encoding the spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test two hypotheses regarding its coding: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation and suggest that audiospatial and visuospatial attention use distinctly different spatial coding schemes.
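The multivoxel pattern analysis reported here can be illustrated with a short, hedged sketch: a linear classifier decoding the attended direction (left vs. right) from trial-by-voxel activity patterns, scored with cross-validation. The data below are synthetic placeholders, not the study's data.

```python
# MVPA sketch: decode attended direction from voxel patterns (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
X = rng.standard_normal((n_trials, n_voxels))  # trial x voxel patterns
y = rng.integers(0, 2, n_trials)               # 0 = attend left, 1 = attend right
X[y == 1, :10] += 0.8                          # weak injected signal for the demo

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```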
Game-based learning supported by mobile intelligence technology has promoted the renewal of teaching and learning models. Here, a Question-Observation-Doing-Explanation (QODE) model based on smartphones was constructed and applied to science learning during school disruption in the COVID-19 pandemic. Drawing on the cognitive-affective theory of learning with media, Bandura's motivation theory, and the community of inquiry model, this study used a self-report measure to examine the effects of students' scientific self-efficacy and cognitive anxiety on science engagement. A total of 357 valid questionnaires were analyzed with structural equation modeling. The results indicated that two types of scientific self-efficacy, scientific learning ability and scientific learning behavior, were negatively associated with cognitive anxiety. Cognitive anxiety was in turn negatively correlated with four types of science engagement expressed through smartphone interactions: cognitive, emotional, behavioral, and social engagement. These findings provide further evidence for game-based learning on smartphones and contribute to a deeper understanding of the associations among scientific self-efficacy, cognitive anxiety, and science engagement. The study suggests that the QODE model is well suited to integrating smart mobile devices into students' science learning.
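As a hedged illustration of the modeling approach, the sketch below replaces the full structural equation model with simple regression-based path estimates of the reported negative associations (self-efficacy to cognitive anxiety to engagement). All variables and coefficients are simulated, not the study's data; only the sample size matches the abstract.

```python
# Regression-based path sketch standing in for the full SEM (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 357  # sample size matching the abstract
self_efficacy = rng.standard_normal(n)
cognitive_anxiety = -0.5 * self_efficacy + rng.standard_normal(n)  # negative path
engagement = -0.4 * cognitive_anxiety + rng.standard_normal(n)     # negative path

path_a = sm.OLS(cognitive_anxiety, sm.add_constant(self_efficacy)).fit()
path_b = sm.OLS(engagement, sm.add_constant(cognitive_anxiety)).fit()
print(f"efficacy -> anxiety:    {path_a.params[1]:+.2f}")
print(f"anxiety  -> engagement: {path_b.params[1]:+.2f}")
```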
Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2–4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands.
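The ROI-level contrast described above (spatial vs. temporal auditory STM within an anterior visuotopic map) can be sketched as a paired comparison of subject-level response estimates. The numbers below are synthetic, and the map name is used only as a label.

```python
# Paired ROI contrast sketch: spatial vs. temporal auditory STM (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 12
# Hypothetical percent-signal-change estimates per subject in one anterior map.
spatial_stm = 0.4 + 0.2 * rng.standard_normal(n_subjects)
temporal_stm = 0.1 + 0.2 * rng.standard_normal(n_subjects)

t, p = stats.ttest_rel(spatial_stm, temporal_stm)
print(f"IPS2, spatial vs. temporal STM: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```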
At a cocktail party, visual cues may help a listener by showing where or when to direct attention, what acoustic modulations a target utterance contains, and/or what articulatory gestures produce the target. Here, we investigated target speech intelligibility while varying the visual cues available in a complex, confusing auditory scene. In all cases, subjects listened for a target utterance in the presence of multiple masker utterances with similar grammatical structure spoken by the same talker. The timing and direction of the target (and maskers) varied randomly, increasing uncertainty about where and when to focus auditory attention. Performance was measured as the number of correctly reported target key words. Performance tended to improve as the amount of visual information increased, particularly when masker phrases came from the direction of the target. Performance was generally similar whether listeners saw full videos of the target talker from the correct direction or only a static image of the talker appearing at the right time in the correct direction. However, knowing when as well as where the target occurred improved performance over knowing only its location. These results suggest that in such scenes, visual cues aid target understanding by indicating where and roughly when to direct attention.
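Because performance is scored as the number of correctly reported target key words, a small scoring function makes the measure concrete. The sentence and key words below are invented examples, not the study's materials.

```python
# Keyword-scoring sketch for speech intelligibility (invented example trial).
def keyword_score(reported, target_keywords):
    """Fraction of target key words present in the listener's report."""
    reported_words = set(reported.lower().split())
    hits = sum(1 for kw in target_keywords if kw.lower() in reported_words)
    return hits / len(target_keywords)

# Example: a masker word ("three") intrudes on the report of "five".
target = ["sue", "found", "five", "red", "cups"]
response = "sue found three red cups"
print(f"Keyword score: {keyword_score(response, target):.2f}")  # 0.80
```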