Speech-sign or "bimodal" bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal-manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Model (GLM) contrasts identified shared and unique activations, and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli, respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words or gestures.
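The GLM contrast approach named above can be illustrated with a minimal NumPy sketch. Everything here is invented for illustration (block lengths, regressor names, simulated voxel signal); it is not the study's actual design or analysis pipeline, only the general shape of a block-design contrast.

```python
import numpy as np

# Minimal sketch of a block-design GLM contrast (hypothetical data).
# Rows are scans; columns are boxcar regressors for two conditions
# plus a constant term.
n_scans = 40
gesture = np.tile([1] * 5 + [0] * 5, 4).astype(float)  # gesture blocks
speech = np.tile([0] * 5 + [1] * 5, 4).astype(float)   # spoken-gloss blocks
X = np.column_stack([gesture, speech, np.ones(n_scans)])

# Simulated voxel time course that responds to both conditions.
rng = np.random.default_rng(0)
y = 2.0 * gesture + 1.5 * speech + rng.normal(0, 0.1, n_scans)

# Least-squares fit of the GLM: y = X @ beta + error.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# A contrast vector tests a linear combination of betas:
# [1, -1, 0] asks whether the gesture response exceeds the speech response.
contrast = np.array([1.0, -1.0, 0.0])
effect = contrast @ beta
print(round(effect, 2))  # positive -> stronger response to gestures
```

A "shared activation" analysis would instead use conjunction-style contrasts such as [1, 0, 0] and [0, 1, 0], requiring both to be reliably positive in the same voxel.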
Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual's experience controlling two languages in the same modality.

A growing number of studies have reported advantages in nonverbal executive control tasks for bilingual children (Bialystok, 2001; Carlson & Meltzoff, 2008; Mezzacappa, 2004) and adults (Bialystok, Craik, Klein, & Viswanathan, 2004; Bialystok, Craik, & Ryan, 2006; Costa, Hernandez, & Sebastián-Gallés, 2008). One explanation for this enhancement is that the regular use of two languages requires a mechanism to control attention and select the target language, an experience that may enhance a general control mechanism.
Evidence from neuroimaging and patient studies suggests that the same neural regions (e.g., dorsolateral prefrontal and anterior cingulate cortices) are engaged during both language-switching tasks and nonverbal control tasks, supporting the interpretation that the mechanism for language control and selection is domain general (Fabbro, Skrap, & Aglioti, 2000; Fan, Flombaum, McCandliss, Thomas, & Posner, 2003; Hernandez, Dapretto, Mazziotta, & Bookheimer, 2001; Rodriguez-Fornells et al., 2005).

We investigate whether the bilingual advantage in executive control stems from the conflict that arises from the need to select only one language for production or from the bilingual's representation of two language systems. Bilinguals who know two spoken languages (unimodal bilinguals) cannot produce two words at the same time; that is, they cannot simultaneously say dog and perro. In contrast, bimodal bilinguals who know both a spoken and a signed language can produce lexical items from both languages at the same time (Emmorey, Borinstein, Thompson, & Gollan, 2008). In contrast to this view, the bilingual advantage could follow from a modality-independent effect of having two language representational systems. Bilinguals are well-practiced and experienced with coding a single lexical concept in two languages. Consistent with this experience, bilingual children show enhancements on dimensional card-sorting tasks that require the same concept to be re-coded in a different way (Bialystok, 1999; Bialystok & Martin, 2004) an...
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported.
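Sensorimotor synchronization studies like the one above typically quantify performance with the signed tap-to-stimulus asynchronies: their mean indexes anticipation bias, and their standard deviation indexes precision. A minimal sketch with invented tap times (not the study's data) follows:

```python
import statistics

# Hypothetical tap times (in seconds) against a 0.5 s metronome.
metronome = [i * 0.5 for i in range(8)]                        # stimulus onsets
taps = [t - 0.02 + 0.01 * (i % 3) for i, t in enumerate(metronome)]

# Asynchrony: signed offset of each tap from its paired stimulus onset.
asynchronies = [tap - onset for tap, onset in zip(taps, metronome)]

# Two standard summary measures of synchronization performance:
mean_asynchrony = statistics.mean(asynchronies)   # negative = taps anticipate
sd_asynchrony = statistics.stdev(asynchronies)    # lower = more precise
print(mean_asynchrony, sd_asynchrony)
```

Comparing `sd_asynchrony` across auditory, flash, and bouncing-ball conditions is one way the "auditory advantage" described above can be made concrete.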
We investigated whether variation in auditory experience in humans during development alters the macroscopic neuroanatomy of primary or auditory association cortices. Volumetric analyses were based on MRI data from 25 congenitally deaf subjects and 25 hearing subjects, all right-handed. The groups were matched for gender and age. Gray and white matter volumes were determined for the temporal lobe, superior temporal gyrus, Heschl's gyrus (HG), and the planum temporale. Deaf and hearing subjects did not differ in the total volume or the gray matter volume of HG, which suggests that auditory deafferentation does not lead to cell loss within primary auditory cortex in humans. However, deaf subjects had significantly larger gray matter-white matter ratios than hearing subjects in HG, with deaf subjects exhibiting significantly less white matter in both left and right HG. Deaf subjects also had higher gray matter-white matter ratios in the rest of the superior temporal gyrus, but this pattern was not observed for the temporal lobe as a whole. These findings suggest that auditory deprivation from birth results in less myelination and/or fewer fibers projecting to and from auditory cortices. Finally, the volumes of planum temporale and HG were significantly larger in the left hemisphere for both groups, suggesting that leftward asymmetries within "auditory" cortices do not arise from experience with auditory processing.

The study of congenitally deaf adults provides a unique opportunity to investigate potential changes in neural organization and structure resulting from sensory deprivation. Animal studies have shown that congenital deafness produces degenerative changes in the central auditory pathway (1, 2). Degeneration in the central auditory system subsequent to profound hearing loss has also been reported in humans. For example, Moore and colleagues (3) observed cell size reductions in the cochlear nucleus of profoundly deaf adults.
However, it is unclear whether auditory deprivation from birth results in degeneration of primary auditory cortex in either animals or humans. The pattern of subcortical projections to primary auditory cortex in congenitally deaf cats is similar to that of normally hearing cats (4, 5), suggesting that cortical auditory regions may continue to receive input from subcortical regions and might not exhibit degeneration. However, functional deficits are observed in synaptic activity and organization within auditory cortex (6), suggesting the possibility of variation in the structure of auditory cortex as a consequence of congenital deafness.

We investigated whether congenital and profound hearing loss in humans results in reduced volume and/or altered morphology of cortical brain regions involved in auditory processing. Specifically, we investigated whether lack of auditory input from birth affects gray matter (GM) and white matter (WM) volumes within primary auditory cortex [defined as the transverse gyrus of Heschl (7)], within auditory association cortex in the planum temporale (PT), or ...
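The gray matter-white matter ratio reported above is a simple per-region quotient; a higher ratio with unchanged gray matter implies reduced white matter. A minimal sketch with placeholder volumes (not the study's measured values) makes the arithmetic explicit:

```python
# Gray-to-white matter ratio for a region of interest.
# The volumes below (in cm^3) are illustrative placeholders only.
def gm_wm_ratio(gray_cm3: float, white_cm3: float) -> float:
    """Ratio of gray matter volume to white matter volume."""
    return gray_cm3 / white_cm3

# Equal gray matter but less white matter yields a higher ratio,
# the pattern reported for Heschl's gyrus in the deaf group.
deaf_hg = gm_wm_ratio(gray_cm3=1.5, white_cm3=0.6)
hearing_hg = gm_wm_ratio(gray_cm3=1.5, white_cm3=0.9)
assert deaf_hg > hearing_hg
```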
Theoretical advances in language research and the availability of increasingly high-resolution experimental techniques in the cognitive neurosciences are profoundly changing how we investigate and conceive of the neural basis of speech and language processing. Recent work closely aligns language research with issues at the core of systems neuroscience, ranging from neurophysiological and neuroanatomic characterizations to questions about neural coding. Here we highlight, across different aspects of language processing (perception, production, sign language, meaning construction), new insights and approaches to the neurobiology of language, aiming to describe promising new areas of investigation in which the neurosciences intersect with linguistic research more closely than before. This paper summarizes in brief some of the issues that constitute the background for talks presented in a symposium at the annual meeting of the Society for Neuroscience. It is not a comprehensive review of any of the issues that are discussed in the symposium.
ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25–31 deaf signers, iconicity ratings from 21–37 hearing non-signers, video clip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
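Because ASL-LEX is distributed as a downloadable tabular dataset, a typical use is filtering signs by their lexical properties. The sketch below uses invented rows and column names (the real database at http://asl-lex.org may label its fields differently); it only illustrates the query pattern:

```python
import csv
import io

# Illustrative ASL-LEX-style records; glosses, ratings, and column
# names are made up for this example.
sample = """\
gloss,frequency_rating,iconicity_rating,sign_type,neighborhood_density
BOOK,6.2,5.1,symmetrical,12
CAT,5.8,4.4,one-handed,7
UNDERSTAND,5.5,1.9,one-handed,3
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Example query: frequent signs that non-signers also rate as highly iconic.
iconic = [
    r["gloss"]
    for r in rows
    if float(r["frequency_rating"]) > 5.0 and float(r["iconicity_rating"]) > 4.0
]
print(iconic)  # → ['BOOK', 'CAT']
```

The same pattern extends to the phonological fields (selected fingers, location, movement), e.g., for computing neighborhood-density cutoffs when building experimental stimulus lists.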