The human language faculty has been claimed to be grounded in the ability to process hierarchically structured sequences. This ability goes beyond the capacity, observable in non-human primates, to process sequences defined by simple transitional probabilities between adjacent elements. Here we show that the processing of these two sequence types is supported by different areas in the human brain. Processing of local transitions is subserved by the left frontal operculum, a region that is phylogenetically older than Broca's area, which is specifically responsible for the computation of hierarchical dependencies. Tractography data revealing differential structural connectivity signatures for these two brain areas provide additional evidence for a segregation of two areas in the left inferior frontal cortex.
In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements, i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but cooperatively in the left inferior frontal gyrus; activity in the LPO increased as a function of structural complexity, whereas activity in the left inferior frontal sulcus (LIFS) was modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.

Keywords: DTI | fMRI | hierarchical structure

Language appears to be a trait specific to humans, at least in its core computational component, that is, grammar.
Defining language as a sequence of symbols, Chomsky (1) proposed a hierarchy of grammars as language production mechanisms with increasing generative power. The lowest-level grammar is finite state grammar (FSG). FSG can be fully specified by transition probabilities between a finite number of states (e.g., words), but it is not powerful enough to generate the structures of natural human languages. Phrase structure grammar (PSG) has more generative power than FSG. A key difference between FSG and PSG is that only PSG can generate the sequence AⁿBⁿ, where A and B denote symbols and n the number of repetitions. The ability to process AⁿBⁿ sequences is crucial for the processing of center-embedded sentences, such as "The man the boy the dog bit greeted is my friend," where the subjects (the man, the boy, and the dog) are A-symbols and the verbs (bit, greeted, and is) are B-symbols. Surprisingly, tests on monkeys (2) and on songbirds (3) showed that whereas songbirds can process AⁿBⁿ sequences, monkeys cannot. However, even if the birds could correctly discriminate AⁿBⁿ sequences from AⁿBᵐ (4 > n, m > 0, n ≠ m),...
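The computational contrast drawn above can be made concrete with a short sketch (illustrative only, not part of the original study): recognizing AⁿBⁿ requires keeping count of how many A-symbols have been seen, which exceeds finite-state power, whereas an FSG-style pattern such as (AB)ⁿ can be verified by checking adjacent transitions alone.

```python
def is_anbn(seq):
    """Recognize A^n B^n (n >= 1). Requires a counter: the number of
    trailing B's must exactly match the number of leading A's."""
    n = 0
    i = 0
    while i < len(seq) and seq[i] == "A":
        n += 1
        i += 1
    # The remainder must consist of exactly n B-symbols.
    return n >= 1 and seq[i:] == ["B"] * n

def is_ab_repeated(seq):
    """Recognize (AB)^n, a finite-state pattern: every adjacent
    transition is either A->B or B->A, with no counting needed."""
    if not seq or len(seq) % 2 != 0:
        return False
    return all(s == "A" for s in seq[0::2]) and all(s == "B" for s in seq[1::2])
```

Note that `is_anbn` rejects `AABBB` even though every local A-to-B transition is legal; only the global count rules it out, which is precisely why transitional probabilities between adjacent elements cannot capture AⁿBⁿ.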
Speech is an important carrier of emotional information. However, little is known about how different vocal emotion expressions are recognized in a receiver's brain. We used multivariate pattern analysis of functional magnetic resonance imaging data to investigate to what degree distinct vocal emotion expressions are represented in the receiver's local brain activity patterns. Specific vocal emotion expressions are encoded in a right fronto-operculo-temporal network involving temporal regions known to subserve suprasegmental acoustic processes and a fronto-opercular region known to support emotional evaluation, and, moreover, in left temporo-cerebellar regions supporting sequential processes. The right inferior frontal region, in particular, was found to differentiate distinct emotional expressions. The present analysis reveals vocal emotion to be encoded in a shared cortical network reflected by distinct brain activity patterns. These results shed new light on theoretical and empirical controversies about the perception of distinct vocal emotion expressions at the level of large-scale human brain signals.