Abstractness and modality of interpersonal communication have a considerable impact on comprehension. They shape thought and the construction of internal models of the environment. Whereas concrete, object-related information can be represented in the mind irrespective of language, abstract concepts require representation in language. Consequently, modality-independent processing of abstract information can be expected. Here we investigated the neural correlates of abstractness (abstract vs. concrete) and modality (speech vs. gestures) to identify an abstractness-specific supramodal neural network. During fMRI data acquisition, 20 participants were presented with videos of an actor either speaking sentences with abstract-social [AS] or concrete object-related [CS] content, or performing meaningful abstract-social emblematic [AG] or concrete object-related tool-use [CG] gestures. Gestures were accompanied by a foreign language to increase comparability between conditions and to frame the communicative context of the gesture videos. Participants performed a content judgment task on the person- vs. object-relatedness of the utterances. The behavioral data suggest comparable comprehension of contents communicated by speech or gesture. Furthermore, we found common neural processing of abstract information independent of modality (AS > CS ∩ AG > CG) in a left-hemispheric network including the left inferior frontal gyrus (IFG), temporal pole, and medial frontal cortex. Modality-specific activations were found in bilateral occipital, parietal, and temporal regions as well as the right inferior frontal gyrus for gestures (G > S), and in left anterior temporal regions and the left angular gyrus for the processing of speech semantics (S > G). These data support the idea that abstract concepts are represented in a supramodal manner. Consequently, gestures referring to abstract concepts are processed in a predominantly left-hemispheric, language-related neural network.
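The conjunction contrast above (AS > CS ∩ AG > CG) follows the minimum-statistic logic: a voxel counts as "common" only if both simple effects exceed threshold. The sketch below illustrates that idea in Python with nilearn; the condition names, TR, and the preloaded `fmri_img` and `events` objects are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a minimum-statistic conjunction (AS > CS ∩ AG > CG).
# `fmri_img` and `events` are assumed to be preloaded; condition names
# and TR are illustrative, not the study's actual parameters.
from nilearn.glm.first_level import FirstLevelModel
from nilearn.image import math_img

model = FirstLevelModel(t_r=2.0).fit(fmri_img, events=events)

# z-maps for the two simple effects
z_abstract_speech = model.compute_contrast("AS - CS", output_type="z_score")
z_abstract_gesture = model.compute_contrast("AG - CG", output_type="z_score")

# Minimum-statistic conjunction: keep, per voxel, the smaller of the two
# z-values, so a voxel survives thresholding only if BOTH contrasts do
conjunction_map = math_img("np.minimum(img1, img2)",
                           img1=z_abstract_speech, img2=z_abstract_gesture)
```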
The semantic integration of gesture and speech (GSI) is mediated by the left posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and the left inferior frontal gyrus (IFG). Evidence from electroencephalography (EEG) suggests that oscillations in the alpha and beta bands may support processes at different stages of GSI. In the present study, we investigated the relationship between electrophysiological oscillations and blood-oxygen-level-dependent (BOLD) activity during GSI. In a simultaneous EEG-fMRI study, German participants (n = 19) were presented with videos of an actor either performing meaningful gestures in the context of a comprehensible German (GG) or an incomprehensible Russian sentence (GR), or just speaking a German sentence (SG). EEG results revealed reduced alpha and beta power for the GG vs. SG condition, while fMRI analyses showed increased BOLD activity in the left pSTS/MTG for GG > GR ∩ GG > SG. In time-window-based EEG-informed fMRI analyses, we further found a positive correlation between single-trial alpha power and the BOLD signal in the left pSTS/MTG, the left IFG, and several subcortical regions. Moreover, the alpha-pSTS/MTG correlation appeared in an earlier time window than the alpha-IFG correlation, supporting a two-stage processing model of GSI. Our study shows that EEG-informed fMRI points to multiple roles of alpha oscillations during GSI, and that the method is a strong candidate for multidimensional investigations of complex cognitive functions such as GSI.
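As a rough illustration of the EEG-informed fMRI logic, the sketch below extracts single-trial alpha power with MNE-Python and enters it as a parametric modulator in an fMRI design via nilearn. The `epochs`, `trial_onsets`, and `fmri_img` objects, the event name, and all parameters are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
import pandas as pd
from mne.time_frequency import tfr_morlet
from nilearn.glm.first_level import FirstLevelModel

# Single-trial alpha power (8-12 Hz); `epochs` is an assumed mne.Epochs
# object time-locked to video onset
power = tfr_morlet(epochs, freqs=np.arange(8, 13), n_cycles=4,
                   return_itc=False, average=False)
alpha = power.data.mean(axis=(1, 2, 3))  # one value per trial

# z-scored alpha power as a parametric modulator of the trial regressor;
# `trial_onsets` (in seconds) and `fmri_img` are assumed to be available
events = pd.DataFrame({
    "onset": trial_onsets,
    "duration": 5.0,
    "trial_type": "GG",
    "modulation": (alpha - alpha.mean()) / alpha.std(),
})
model = FirstLevelModel(t_r=2.0).fit(fmri_img, events=events)
alpha_bold_map = model.compute_contrast("GG", output_type="z_score")
```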
Efference copy-based forward model mechanisms may help us distinguish between self-generated and externally generated sensory consequences. Previous studies have shown that self-initiation modulates neural and perceptual responses to identical stimulation. For example, event-related potentials (ERPs) elicited by tones that follow a button press are reduced in amplitude relative to ERPs elicited by passively attended tones. However, previous EEG studies investigating visual stimuli in this context are rare, provide inconclusive results, and lack adequate control conditions with passive movements. Furthermore, although self-initiation is known to modulate behavioral responses, it is not known whether differences in ERP amplitude also reflect differences in the perception of sensory outcomes. In this study, we presented participants with visual stimuli (gray discs) following either active button presses or passive button presses, in which an electromagnet moved the participant's finger. Two discs presented 500-1250 ms apart followed each button press, and participants judged which of the two was more intense. Early components of the visual response (N1 and P2) over occipital electrodes were suppressed in the active condition. Interestingly, suppression in the intensity judgment task correlated only with suppression of the visual P2 component. These data support the notion of efference copy-based forward model predictions in the visual modality, but especially the later process (P2) appears to be perceptually relevant. Taken together, the results challenge the assumption that N1 differences reflect perceptual suppression and emphasize the relevance of the P2 ERP component.
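The component comparison could look roughly like the MNE-Python sketch below. The epoch labels, occipital channel set, and latency windows are assumptions for illustration, not the study's exact parameters.

```python
# Sketch: mean N1/P2 amplitudes at occipital sites for active vs. passive
# button presses; `epochs` is an assumed mne.Epochs object whose events
# are labeled "active" and "passive"
occipital = ["O1", "Oz", "O2"]
windows = {"N1": (0.10, 0.15), "P2": (0.18, 0.25)}  # assumed latency windows

for component, (tmin, tmax) in windows.items():
    for condition in ("active", "passive"):
        evoked = epochs[condition].average().pick(occipital)
        # mean amplitude in the component window, converted to microvolts
        amplitude = evoked.copy().crop(tmin, tmax).data.mean() * 1e6
        print(f"{component} {condition}: {amplitude:.2f} µV")
```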
Language and action have long been thought of as closely related. Comprehending words or phrases that refer to actions commonly activates motor and premotor areas, and this comprehension process interacts with action preparation and/or execution. However, it remains unclear whether comprehending action-related language interacts with action observation. In the current study, we examined whether the observation of tool-use gestures interacts with language. In an electroencephalography (EEG) study (n = 20), participants were presented with video clips of an actor performing tool-use (TU, e.g., hammering with a fist) and emblematic (EM, e.g., the thumbs-up sign for 'good job') gestures accompanied by either comprehensible German (G) or incomprehensible Russian (R) sentences. Participants performed a semantic judgment task, evaluating whether the co-speech gestures were object-related or socially related. Behavioral results showed faster responses for TU versus EM gestures only in the German condition. For EEG, we found that TU gestures elicited a beta power decrease (~20 Hz) compared with EM gestures; however, this effect was reduced when gestures were accompanied by German rather than Russian sentences. We conclude that the processing of action-related sentences may facilitate gesture observation: the motor simulation required for TU gestures, as indexed by reduced beta power, was modulated when accompanied by comprehensible German speech. Our results corroborate the functional role of beta oscillations in the perception of hand gestures and provide novel evidence concerning language-motor interaction.
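A skeletal version of the beta-band contrast might look like the MNE-Python sketch below. Event names, the frequency range, and the baseline window are illustrative assumptions rather than the study's analysis settings.

```python
import numpy as np
from mne.time_frequency import tfr_morlet

# Beta-band (~15-25 Hz) power for tool-use (TU) vs. emblematic (EM) gestures;
# `epochs` with "TU"/"EM" event labels is an assumed mne.Epochs object
freqs = np.arange(15, 26)
tfrs = {}
for condition in ("TU", "EM"):
    tfr = tfr_morlet(epochs[condition], freqs=freqs, n_cycles=freqs / 2.0,
                     return_itc=False)
    # express post-stimulus power as percent change from a pre-stimulus baseline
    tfr.apply_baseline(baseline=(-0.5, 0.0), mode="percent")
    tfrs[condition] = tfr

# A negative value indicates a beta power decrease for TU relative to EM
beta_difference = tfrs["TU"].data.mean() - tfrs["EM"].data.mean()
print(f"TU - EM beta power (baseline %): {beta_difference:.3f}")
```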