Speech is an important carrier of emotional information. However, little is known about how different vocal emotion expressions are recognized in a receiver's brain. We used multivariate pattern analysis of functional magnetic resonance imaging data to investigate the degree to which distinct vocal emotion expressions are represented in the receiver's local brain activity patterns. Specific vocal emotion expressions were encoded in a right fronto-operculo-temporal network involving temporal regions known to subserve suprasegmental acoustic processes and a fronto-opercular region known to support emotional evaluation, and, moreover, in left temporo-cerebellar regions subserving sequential processes. The right inferior frontal region, in particular, was found to differentiate distinct emotional expressions. The present analysis reveals vocal emotion to be encoded in a shared cortical network reflected in distinct brain activity patterns. These results shed new light on theoretical and empirical controversies about the perception of distinct vocal emotion expressions at the level of large-scale human brain signals.
In language processing, the relative contribution of early sensory and higher cognitive brain areas is still an open issue. A recent controversial hypothesis proposes that sensory cortices show sensitivity to syntactic processes, whereas other studies suggest a wider neural network outside sensory regions. The goal of the current event-related fMRI study was to clarify the contribution of sensory cortices to auditory syntactic processing in a 2 × 2 design. Two-word utterances were presented auditorily and varied both in perceptual markedness (presence or absence of the overt word category marker "-t") and in grammaticality (syntactically correct or incorrect). A multivariate pattern classification approach was applied to the data, flanked by conventional cognitive subtraction analyses. The combination of methods and the 2 × 2 design revealed a clear picture: the cognitive subtraction analysis found initial syntactic processing signatures in a neural network including the left inferior frontal gyrus (IFG), the left anterior superior temporal gyrus (aSTG), the left superior temporal sulcus (STS), as well as the right STS/STG. Classification of local multivariate patterns indicated the left-hemispheric regions in IFG, aSTG, and STS to be more syntax-specific than the right-hemispheric regions. Importantly, auditory sensory cortices were sensitive only to the overt perceptual marking, but not to the grammaticality, arguing against syntax-induced modulations of sensory cortices. Instead, our data provide clear evidence for a distinction between regions involved in purely perceptual processes and regions involved in initial syntactic processes.
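Decoding analyses of the kind described above typically rest on cross-validated classification of local activity patterns: a classifier is trained on a subset of trials and tested on held-out trials, and above-chance accuracy indicates that the region's pattern carries condition information (e.g. grammatical vs. ungrammatical). The sketch below is illustrative only: the abstract does not specify the classifier, so a simple nearest-centroid rule on synthetic trials-by-voxels data stands in for it, and the function name and parameters are hypothetical.

```python
import numpy as np

def cv_decoding_accuracy(patterns, labels, n_folds=5, seed=0):
    """Cross-validated decoding of a binary condition from local
    activity patterns (`patterns` is a trials x voxels array).
    A nearest-centroid classifier stands in for the unspecified
    classifier used in the study."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test_idx in folds:
        train_idx = np.setdiff1d(order, test_idx)
        # class centroids are estimated on training trials only
        c0 = patterns[train_idx][labels[train_idx] == 0].mean(axis=0)
        c1 = patterns[train_idx][labels[train_idx] == 1].mean(axis=0)
        for i in test_idx:
            d0 = np.linalg.norm(patterns[i] - c0)
            d1 = np.linalg.norm(patterns[i] - c1)
            correct += int((d1 < d0) == bool(labels[i]))
    return correct / len(labels)
```

Accuracies well above the 50% chance level on held-out trials are what analyses of this type interpret as evidence that the condition is locally encoded.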
Real-time fMRI neurofeedback is a feasible tool for learning volitional regulation of brain activity. So far, most studies provide continuous feedback that is updated upon every volume acquisition. Although this maximizes the temporal resolution of the feedback, it may come with disadvantages: participants can be distracted from the regulation task by (1) the intrinsic delay of the hemodynamic response and the associated feedback and (2) the limited cognitive resources available to simultaneously evaluate feedback and stay engaged with the task. Here, we systematically investigated how groups receiving different variants of feedback (continuous vs. intermittent) and a control group receiving no feedback differed in their ability to regulate amygdala activity using positive memories and feelings. In contrast to the feedback groups, no learning effect was observed in the group without any feedback presentation. The group receiving intermittent feedback exhibited better amygdala regulation performance than the group receiving continuous feedback. Behavioural measurements showed that these effects were reflected in differences in task engagement. Overall, we demonstrate not only that the presentation of feedback is a prerequisite for learning volitional control of amygdala activity but also that intermittent feedback is superior to continuous feedback presentation.
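The continuous vs. intermittent distinction can be made concrete as a schedule of volumes at which feedback is displayed. The sketch below is a minimal illustration, not the study's actual implementation; the block length of 10 volumes and the function name are arbitrary assumptions.

```python
def feedback_schedule(n_volumes, mode, interval=10):
    """Volumes at which feedback is shown: every acquired volume for
    continuous feedback, only at the end of each regulation block for
    intermittent feedback, and never for a no-feedback control group.
    The interval is an illustrative block length in volumes."""
    if mode == "continuous":
        return list(range(1, n_volumes + 1))
    if mode == "intermittent":
        return list(range(interval, n_volumes + 1, interval))
    return []  # no-feedback control group
```

Under an intermittent schedule, participants can devote the regulation blocks entirely to the task and evaluate feedback only between blocks, which is the mechanism the abstract links to better task engagement.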
The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, we investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in higher cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. To this end, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications: neural patterns elicited by each of the violation sentences were separately classified against the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.
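Across-subject decoding of the kind described here can be sketched as leave-one-subject-out classification: a classifier trained on all remaining subjects' single-trial patterns is tested on the held-out subject, so accuracy above chance implies that the informative patterns are shared across individuals. The sketch below is a simplified stand-in (nearest-centroid classifier, synthetic trials-by-sensors layout); the names and data shapes are assumptions, not the study's pipeline.

```python
import numpy as np

def across_subject_accuracy(subject_data, subject_labels):
    """Leave-one-subject-out decoding: train on all but one subject,
    test on the held-out subject. `subject_data` is a list of
    (trials x sensors) arrays, `subject_labels` a list of binary
    label arrays (e.g. correct vs. violation sentences)."""
    accs = []
    for held_out in range(len(subject_data)):
        train_X = np.vstack([d for i, d in enumerate(subject_data) if i != held_out])
        train_y = np.concatenate([l for i, l in enumerate(subject_labels) if i != held_out])
        # centroids of the two conditions, pooled over training subjects
        c0 = train_X[train_y == 0].mean(axis=0)
        c1 = train_X[train_y == 1].mean(axis=0)
        X, y = subject_data[held_out], subject_labels[held_out]
        pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
        accs.append((pred == y).mean())
    return float(np.mean(accs))
```

Comparing this quantity with conventional within-subject cross-validation is what allows the kind of generalization claim the abstract makes.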
Perceptual decisions not only depend on the incoming information from sensory systems but constitute a combination of current sensory evidence and internally accumulated information from past encounters. Although recent evidence emphasizes the fundamental role of prior knowledge in perceptual decision making, only a few studies have quantified the relevance of such priors for perceptual decisions and examined their interplay with other decision-relevant factors, such as stimulus properties. In the present study we asked whether hysteresis, which describes the stability of a percept despite a change in a stimulus property and is known to occur at perceptual thresholds, also acts as an implicit prior in tactile spatial decision making, supporting the stability of a decision across successively presented random stimuli (i.e., decision hysteresis). We applied a variant of the classical 2-point discrimination task and found that hysteresis influenced perceptual decision making: participants were more likely to decide ‘same’ rather than ‘different’ on successively presented pin distances. In a direct comparison between the influence of the applied pin distances (explicit stimulus property) and hysteresis, we found that, on average, the stimulus property explained significantly more variance of participants’ decisions than hysteresis. However, when focusing on pin distances at threshold, we found a trend for hysteresis to explain more variance. Furthermore, the less variance was explained by the pin distance on a given decision, the more variance was explained by hysteresis, and vice versa. Our findings suggest that hysteresis acts as an implicit prior in tactile spatial decision making that becomes increasingly important when explicit stimulus properties provide decreasing evidence.
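A minimal way to quantify decision hysteresis in a binary response series is to compare the observed probability of repeating the previous decision with the repetition rate expected from the marginal response rates alone. This is an illustrative simplification of the regression-based variance analysis described above, not the study's method; the function name is hypothetical.

```python
import numpy as np

def decision_hysteresis_index(decisions):
    """Excess probability of repeating the previous binary decision
    over what the overall decision rates alone would predict.
    Positive values indicate hysteresis (sticky decisions), negative
    values indicate alternation."""
    d = np.asarray(decisions)
    prev, curr = d[:-1], d[1:]
    p_repeat = (prev == curr).mean()
    # chance level of repetition given only the marginal rates
    p1 = d.mean()
    p_chance = p1**2 + (1 - p1)**2
    return p_repeat - p_chance
```

Because stimulus sequences in such experiments are randomized, a positive index cannot be explained by the stimuli themselves and points to an internal prior carried over from the previous decision.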
The ability to detect where a person is attending is fundamental for brain-computer interfaces. We explore how the attentional focus can be decoded from brain signals noninvasively acquired with functional magnetic resonance imaging (fMRI). Several cortical regions have previously been reported to contain topographic maps reflecting the focus of visual attention. Interestingly, attentional maps have been observed to become gradually less topographic when moving from early visual areas toward extra-occipital areas. Recent studies suggested that this might indicate a shift from topographically represented local processing to global processing dominated by laterality. However, it remains unclear to which extent the topographic organization of a region characterizes its capacity to encode visuospatial attention. Here we addressed this problem by applying multivoxel pattern analysis to fMRI signals. In combination with a cortical surface-based mapping of spatial preference, our analysis revealed a broad cortical network that locally contains information about the locus of visual attention. The informative regions are not restricted to topographic areas: even in frontal areas, where topographic organization is almost indiscernible, the attentional locus can be decoded from brain activity. Specifically, we find attentional information in the right middle frontal gyrus and the right ventrolateral prefrontal cortex. Furthermore, in these two areas the information is sufficient to distinguish attentional loci within the ipsi- as well as the contralateral visual hemifield. Laterality dominance decreases when moving from occipital to frontal areas. Our results suggest that information about visuospatial attention is encoded beyond topographically organized regions by local patterns of brain activity.
In this work we present a new open-source software package offering a unified framework for the real-time adaptation of fMRI stimulation procedures. The software provides a straightforward setup and a highly flexible approach for adapting fMRI paradigms while the experiment is running. The general framework allows the inclusion of parameters reflecting the subject's compliance, such as whether gaze is directed at visually presented stimuli, as well as physiological fluctuations such as blood pressure or pulse. Additionally, this approach makes it possible to investigate complex scientific questions, for example the influence of EEG rhythms or of the fMRI signals themselves. As a proof of concept, we used our software in a usability example for an fMRI experiment in which the presentation of emotional pictures depended on the subject's gaze position. Whether subjects actually look at the presented stimuli can have a significant impact on the results. So far, when this is taken into account during fMRI data analysis, it is commonly done by the post-hoc removal of erroneous trials. Here, we propose an a priori adaptation of the paradigm during the experiment's runtime. Our fMRI findings clearly show the benefits of an adapted paradigm in terms of statistical power and higher effect sizes in emotion-related brain regions. This can be of special interest for all experiments with low statistical power due to a limited number of subjects, limited time, costs, or available data, as is the case with real-time fMRI.
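A gaze-contingent validity check of the kind described can be sketched as follows: instead of discarding trials post hoc, the running experiment decides online whether the subject fixated the stimulus long enough. This is a minimal illustration, not code from the package itself; the function names, the tolerance radius, and the validity threshold are all hypothetical choices.

```python
import math

def gaze_on_stimulus(gaze_xy, stim_xy, radius_deg=2.0):
    """True if the gaze sample falls within a tolerance window around
    the stimulus centre (coordinates in degrees of visual angle; the
    radius is an illustrative choice)."""
    dx = gaze_xy[0] - stim_xy[0]
    dy = gaze_xy[1] - stim_xy[1]
    return math.hypot(dx, dy) <= radius_deg

def trial_is_valid(gaze_samples, stim_xy, min_valid_fraction=0.8):
    """A priori adaptation rule: the trial counts only if the subject
    fixated the stimulus for a sufficient fraction of its duration;
    otherwise the paradigm would re-present the stimulus at runtime
    rather than removing the trial in a post-hoc analysis."""
    valid = sum(gaze_on_stimulus(g, stim_xy) for g in gaze_samples)
    return valid / len(gaze_samples) >= min_valid_fraction
```

Evaluating this rule during acquisition is what allows every analyzed trial to carry a compliant stimulus presentation, which is the source of the gain in statistical power reported above.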