Highlights
- Activation of sign language while bimodal bilinguals heard spoken words.
- Non-selective cross-modality language activation in native and late signers.
- Parallel activation of the non-dominant language while using the dominant language.

Abstract
This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is the dominant language and Spanish Sign Language (LSE) the non-dominant language, performed a monolingual semantic decision task on word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. Bimodal bilinguals were faster at judging semantically related words when the corresponding signed translations were phonologically related, and slower at judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE showed neither effect. These results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.
Spoken words and signs both consist of structured sub-lexical units. Whereas phonemes unfold over time in the spoken signal, visual sub-lexical units such as location and handshape are produced simultaneously in signs. In the current study we used the visual world paradigm to investigate the role of sub-lexical units in lexical access in spoken Spanish and in Spanish Sign Language (LSE) in hearing early bimodal bilinguals and in hearing second language (L2) learners of LSE, all native speakers of Spanish. Experiment 1 investigated phonological competition in spoken Spanish from words sharing onset or rhyme. Experiment 2 investigated competition in LSE from signs sharing handshape or location. For Spanish, the results confirm previous findings for spoken word recognition: onset competition arises earlier and is more salient than rhyme competition. For sign recognition, native bimodal bilinguals (native users of both a spoken and a signed language) showed earlier competition from location than from handshape, but overall stronger competition from handshape than from location. Hearing bimodal bilinguals who learned LSE as a second language also experienced competition from both sign parameters; however, they showed later effects for location competitors and weaker effects for handshape competitors than native signers. Our results demonstrate that the temporal dynamics of spoken words and signs shape the time course of lexical co-activation. Furthermore, age of acquisition of the signed language modulates sub-lexical processing of signs, which may reflect an enhanced ability of native signers to use early phonological cues in transition movements to constrain sign recognition.
This study investigated whether language control during language production in bilinguals generalizes across modalities, and to what extent the language control system is shaped by competition for the same articulators. Using a cued language-switching paradigm, we examined whether switch costs arise when hearing signers switch between a spoken and a signed language. The results showed an asymmetrical switch cost for bimodal bilinguals in both reaction times and accuracy, with larger costs for the (dominant) spoken language. Our findings suggest important similarities in the mechanisms underlying language selection in bimodal and unimodal bilinguals, with competition occurring at multiple levels beyond phonology.
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish–Spanish Sign Language) and unimodal bilinguals (Spanish–Basque). The aim was to gauge whether (and how) seeing a sign coactivates words and, conversely, how hearing a word coactivates signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure differs across modalities. Spoken word activation follows the temporal structure of the word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign itself is seen. We provide evidence that this pattern of activation is instead driven by how common the signs' sub-lexical units are in the lexicon. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction shapes language processing.
Manuel Carreiras, for raising the level of research related to Spanish Sign Language to the scientific equivalent of the "champions league", to paraphrase what he once told me, and for allowing me to be part of his team; and Dr. Brendan Costello for his invaluable support. Words alone cannot express all that this work owes him as a supervisor and how much I value him as a friend.

The rest of the sign language team members at BCBL have also left their imprint on this thesis. Thanks to Noemi Fariña and Miguel Ángel Sampedro. Special thanks to Marcel Giezen for being such an inspiration and a model to follow in research. Much love to Patricia Dias for becoming such a close friend. We have been through so much together!

I am grateful to the organizations that provided staff and premises to run the experiments: CILSEM (Sign Language Interpreters Association in Madrid), ASORMADRID (Deaf Association in Madrid), Fundación CNSE (Madrid), APERSORVA (Deaf Association in Valladolid), ARANSBUR (Association of Families with Deaf Children, Burgos), APSBU (Deaf Association in Burgos), and ASORNA (Deaf Association in Navarra). The University of Valladolid and the López Vicuña Vocational Institute (Palencia) also provided spaces in which to run the experiments.

Thanks to Ainhoa Ruiz de Angulo for recording the signs, and to David Carcedo for his Spanish and Basque words. I am truly indebted to all the participants. Meeting them all has been one of the highlights of this study.

Thanks to everyone at the BCBL for providing the perfect environment to learn how to do science and for sharing all your knowledge. I would like to mention the predoc community and especially the "Postmodern Group" for being the best companions I could