Abstract: The neuroanatomical bases of bilingualism have recently received intensive attention. However, it remains a matter of debate how brain structure changes with bilingual experience, since current findings are highly variable. The aim of this review is to examine these structural studies from a methodological perspective and to discuss two major methodological problems that could give rise to this variability. The first problem is sample selection, an issue directly related to the heterogeneous nature of bilingualism. The second problem is the inconsistency in the methods used for the analysis of brain imaging data. This review reveals that although structural changes related to bilingualism have been reported in regions involved in language/cognitive control and language processing, these results are not yet sufficiently numerous or consistent to allow important generalizations to be reached. Consequently, current evidence offers ambiguous support for neural models of bilingualism. This shortcoming in the field is exacerbated by critical methodological differences between studies that only further complicate the matter. We conclude by identifying issues that should be taken into consideration so that studies are more comparable and results are easier to aggregate and interpret. We also point out future directions that would allow for progress in the field.
Highlights:
- Activation of sign language while bimodal bilinguals heard spoken words.
- Non-selective cross-modal language activation in native and late signers.
- Parallel activation of the non-dominant language while using the dominant language.

Abstract: This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is the dominant language and Spanish Sign Language (LSE) the non-dominant language, performed a monolingual semantic decision task with word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. The results showed that bimodal bilinguals were faster at judging semantically related words when the equivalent signed translations were phonologically related, and slower at judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE showed neither effect. The results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.
Reading typically involves phonological mediation, especially for transparent orthographies with a regular letter-to-sound correspondence. In this study we ask whether phonological coding is a necessary part of the reading process by examining prelingually deaf individuals who are skilled readers of Spanish. We conducted two EEG experiments exploiting the pseudohomophone effect, in which nonwords that sound like words elicit phonological encoding during reading. The first, a semantic categorization task with masked priming, resulted in modulation of the N250 by pseudohomophone primes in hearing but not in deaf readers. The second, a lexical decision task, confirmed the pattern: hearing readers had increased errors and an attenuated N400 response for pseudohomophones compared to control pseudowords, whereas deaf readers did not treat pseudohomophones any differently from pseudowords, either behaviourally or in the ERP response. These results offer converging evidence that skilled deaf readers do not rely on phonological coding during visual word recognition. Furthermore, the finding demonstrates that reading can take place in the absence of phonological activation, and we speculate about the alternative mechanisms that allow these deaf individuals to read competently.
Spoken words and signs both consist of structured sub-lexical units. While phonemes unfold in time in the case of the spoken signal, visual sub-lexical units such as location and handshape are produced simultaneously in signs. In the current study we investigate the role of sub-lexical units in lexical access in spoken Spanish and in Spanish Sign Language (LSE) in hearing early bimodal bilinguals and in hearing second language (L2) learners of LSE, both native speakers of Spanish, using the visual world paradigm. Experiment 1 investigated phonological competition in spoken Spanish from words sharing onset or rhyme. Experiment 2 investigated competition in LSE from signs sharing handshape or location. For Spanish, the results confirm previous findings for word recognition: onset competition comes first and is more salient than rhyme competition. For sign recognition, native bimodal bilinguals (native speakers of spoken and signed languages) showed earlier competition from location than handshape, and overall stronger competition from handshape compared to location. Hearing bimodal bilinguals who learned LSE as a second language also experienced competition from both signed parameters. However, they showed later effects for location competitors and weaker effects for handshape competitors than native signers. Our results demonstrate that the temporal dynamics of spoken words and signs impact the time course of lexical co-activation. Furthermore, age of acquisition of the signed language modulates sub-lexical processing of signs, and may reflect enhanced abilities of native signers to use early phonological cues in transition movements to constrain sign recognition.
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

Keywords: Sign language; Lexical database; Spanish Sign Language (LSE, lengua de signos española); Stimulus material

Psycholinguistic research on sign language has traditionally focused on investigating whether spoken and sign language processing are governed by similar or different cognitive mechanisms and underpinned by similar or different neuroanatomical substrates. Studies have looked into various aspects of processing in signed languages, and the findings so far have shown that lexical access in signed languages is broadly affected by similar features to those in spoken languages (for an overview see Carreiras, 2010).
Previous work has confirmed the fundamental distinction between form and meaning through "tip of the finger" experiences (Thompson, Emmorey, & Gollan, 2005), the role of morphological complexity (Emmorey & Corina, 1990) and of phonological parameters (Gutiérrez, Müller, Baus, & Carreiras, 2012), semantic interference effects (Baus, Gutiérrez-Sigut, Quer, & Carreiras, 2008), familiarity and phonological neighborhood (Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008), and cross-language interactions in bimodal bilinguals (Kubus, Villwock, Morford, & Rathmann, 2014; Morford, Kroll, Piñar, & Wilkinson, 2014; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011). While many of these findings parallel what is already known about spoken languages, results that are puzzling, inconclusive, or contradictory to previous findings have also been reported. For instance, priming studies with sign languages have shown the expected facilitatory effect of a semantic relation (Mayberry & Witcher, 2005), but not always clear effects of the phonological parameters. Phonological parameters (location, handshape, and movement) influence sign recognition in different ways, with some parameters showing an inhibitory effect and others showing facilitation (Gutiérrez, Williams, Grosvald, & Corina, 2012; see also Caselli & Cohen-Goldberg, 2014, for a computational model). Furthermore, results are not consistent: for example, some studies have found location to have an inhibitory effect on lexical retrieval (Corina & Hildebrandt, 2002; Carreiras et al., 2008), while other studies have found a facilitatory effect of location combined with movement (Baus, Gutiérrez, & Carreiras, 2014; Dye & Shih, 2006).