The human brain demonstrates complex yet systematic patterns of neural activity at rest. We examined whether functional connectivity among those brain regions typically active during rest depends on ongoing and recent task demands and individual differences. We probed the temporal coordination among these regions during periods of language comprehension and during the rest periods that followed comprehension. Our findings show that the topography of this "rest network" varies with exogenous processing demands. The network was more highly interconnected during rest than during listening, and also when participants listened to unsurprising rather than surprising information. Furthermore, connectivity patterns during rest varied as a function of recent listening experience. Individual variability in connectivity strength was associated with cognitive function: more attentive comprehenders demonstrated weaker connectivity during language comprehension, and a greater differentiation between connectivity during comprehension and rest. The regions we examined have generally been thought to form an invariant physiological and functional network whose activity reflects spontaneous cognitive processes. Our findings suggest that their function extends beyond the mediation of unconstrained thought, and that they play an important role in higher-level cognitive function.

Keywords: connectivity; default mode; individual differences; stochastic; fMRI
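Functional connectivity of the kind examined in this study is commonly operationalized as the pairwise temporal correlation between regional BOLD time series. The following is a minimal illustrative sketch with simulated data; the region labels and signal model are hypothetical and do not come from the study itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Hypothetical BOLD time series for three regions of interest:
# two share a common driving signal, the third is independent noise.
common = rng.standard_normal(n_timepoints)
roi = {
    "mPFC": common + 0.5 * rng.standard_normal(n_timepoints),
    "PCC":  common + 0.5 * rng.standard_normal(n_timepoints),
    "V1":   rng.standard_normal(n_timepoints),
}

names = list(roi)
series = np.vstack([roi[n] for n in names])

# Connectivity matrix: Pearson correlation between each pair of time series
conn = np.corrcoef(series)

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]}-{names[j]}: r = {conn[i, j]:+.2f}")
```

Under this measure, "stronger connectivity" between two regions simply means a larger correlation coefficient between their activity time courses; comparing such matrices across task and rest periods is one standard way the topography of a network can be shown to vary with processing demands.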
Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, she made semantically unrelated hand movements. In the third, she kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.

Keywords: discourse comprehension; fMRI; gestures; semantic processing; inferior frontal gyrus

Face-to-face communication is based on more than speech alone. Audible speech is only one component of a communication system that also includes co-speech gestures: hand and arm movements that accompany spoken language (Kendon, 1994; McNeill, 1992; McNeill, 2005). Such co-speech gestures serve an important role in face-to-face communication for both speaker and listener.
Listeners not only process the words that speakers produce, but also continuously integrate gestures with speech and with other visual information (e.g., the speaker's lips, mouth, and eyes) to arrive at the speaker's meaning (Goldin-Meadow, 2006; Kendon, 1994; McNeill, 2005). Despite the importance of co-speech gesture to communicative
Sentences are the primary means by which people communicate information. The information conveyed by a sentence depends on how that sentence relates to what is already known. We conducted an fMRI study to determine how the brain establishes and retains this information. We embedded sentences in contexts that rendered them more or less informative and assessed which functional networks were associated with comprehension of these sentences and with memory for their content. We identified two such networks: A frontotemporal network, previously associated with working memory and language processing, showed greater activity when sentences were informative. Independently, greater activity in this network predicted subsequent memory for sentence content. In a separate network, previously associated with resting-state processes and generation of internal thoughts, greater neural activity predicted subsequent memory for informative sentences but also predicted subsequent forgetting for less-informative sentences. These results indicate that in the brain, establishing the information conveyed by a sentence, that is, its contextually based meaning, involves two dissociable networks, both of which are related to processing of sentence meaning and its encoding to memory.
Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus was preceded by stimuli that (1) shared the target's auditory features (auditory overlap), (2) shared the target's visual features (visual overlap), or (3) shared neither the target's auditory nor visual features but were perceived as the target (perceptual overlap). In two left-hemisphere regions (pars opercularis, planum polare), the target evoked less activity when it was preceded by the perceptually overlapping stimulus than when preceded by stimuli that shared one of its sensory components. This pattern of neural facilitation indicates that these regions code sublexical speech at an abstract level corresponding to that of the speech percept.
The capacity for assessing the degree of uncertainty in the environment relies on estimating statistics of temporally unfolding inputs. This, in turn, allows calibration of predictive and bottom-up processing, and signalling changes in temporally unfolding environmental features. In the last decade, several studies have examined how the brain codes for and responds to input uncertainty. Initial neurobiological experiments implicated frontoparietal and hippocampal systems, based largely on paradigms that manipulated distributional features of visual stimuli. However, later work in the auditory domain pointed to different systems, whose activation profiles have interesting implications for computational and neurobiological models of statistical learning (SL). This review begins by briefly recapping the historical development of ideas pertaining to the sensitivity to uncertainty in temporally unfolding inputs. It then discusses several issues at the interface of studies of uncertainty and SL. Next, it presents several current treatments of the neurobiology of uncertainty and reviews recent findings that point to principles that serve as important constraints on future neurobiological theories of uncertainty, and relatedly, SL. This review suggests it may be useful to establish closer links between neurobiological research on uncertainty and SL, considering particularly mechanisms sensitive to local and global structure in inputs, the degree of input uncertainty, the complexity of the system generating the input, learning mechanisms that operate on different temporal scales and the use of learnt information for online prediction. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.
Coding for the degree of disorder in a temporally unfolding sensory input allows for optimized encoding of these inputs via information compression and predictive processing. Prior neuroimaging work has examined sensitivity to statistical regularities within single sensory modalities and has associated this function with the hippocampus, anterior cingulate, and lateral temporal cortex. Here we investigated to what extent sensitivity to input disorder, quantified by Markov entropy, is subserved by modality-general or modality-specific neural systems when participants are not required to monitor the input. Participants were presented with rapid (3.3 Hz) auditory and visual series varying over four levels of entropy, while monitoring an infrequently changing fixation cross. For visual series, sensitivity to the magnitude of disorder was found in early visual cortex, the anterior cingulate, and the intraparietal sulcus. For auditory series, sensitivity was found in inferior frontal, lateral temporal, and supplementary motor regions implicated in speech perception and sequencing. Ventral premotor and central cingulate cortices were identified as possible candidates for modality-general uncertainty processing, exhibiting marginal sensitivity to disorder in both modalities. The right temporal pole differentiated the highest and lowest levels of disorder in both modalities, but did not show general sensitivity to the parametric manipulation of disorder. Our results indicate that neural sensitivity to input disorder relies largely on modality-specific systems embedded in extended sensory cortices, though uncertainty-related processing in frontal regions may be driven by both input modalities.
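The disorder manipulation above is quantified by Markov entropy, i.e., the conditional entropy of each element of a series given the preceding element. As a rough illustration of how this statistic distinguishes ordered from disordered sequences (a generic sketch, not the study's actual stimulus-generation code):

```python
import numpy as np
from collections import Counter

def markov_entropy(seq):
    """First-order Markov (transition) entropy of a symbol sequence, in bits:
    H = -sum_{i,j} p(i, j) * log2 p(j | i)."""
    pairs = Counter(zip(seq, seq[1:]))
    total = sum(pairs.values())
    joint = {pair: n / total for pair, n in pairs.items()}  # p(i, j)
    marginal = Counter()                                    # p(i) over pair-initial symbols
    for (i, _), p in joint.items():
        marginal[i] += p
    return -sum(p * np.log2(p / marginal[i]) for (i, _), p in joint.items())

print(markov_entropy("ABABABABABAB"))  # fully predictable transitions: ~0 bits
print(markov_entropy("AABBAABBAABB"))  # less predictable transitions: closer to 1 bit
```

A deterministic alternation yields zero transition entropy, while a series whose next element is harder to predict from the current one yields higher values; stimulus series at different entropy levels can be constructed by sampling from transition matrices of graded predictability.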