Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict among competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual's experience of controlling two languages in the same modality.

A growing number of studies have reported advantages in nonverbal executive control tasks for bilingual children (Bialystok, 2001; Carlson & Meltzoff, 2008; Mezzacappa, 2004) and adults (Bialystok, Craik, Klein, & Viswanathan, 2004; Bialystok, Craik, & Ryan, 2006; Costa, Hernandez, & Sebastián-Gallés, 2008). One explanation for this enhancement is that the regular use of two languages requires a mechanism to control attention and select the target language, an experience that may enhance a general control mechanism.
Evidence from neuroimaging and patient studies suggests that the same neural regions (e.g., dorsolateral prefrontal and anterior cingulate cortices) are engaged during both language-switching tasks and nonverbal control tasks, supporting the interpretation that the mechanism for language control and selection is domain general (Fabbro, Skrap, & Aglioti, 2000; Fan, Flombaum, McCandliss, Thomas, & Posner, 2003; Hernandez, Dapretto, Mazziotta, & Bookheimer, 2001; Rodriguez-Fornells et al., 2005).

We investigate whether the bilingual advantage in executive control stems from the conflict that arises from the need to select only one language for production, or from the bilingual's representation of two language systems. Bilinguals who know two spoken languages (unimodal bilinguals) cannot produce two words at the same time; that is, they cannot simultaneously say dog and perro. In contrast, bimodal bilinguals, who know both a spoken and a signed language, can produce lexical items from both languages at the same time (Emmorey, Borinstein, Thompson, & Gollan, 2008).

In contrast to this view, the bilingual advantage could follow from a modality-independent effect of having two language representational systems. Bilinguals are well practiced and experienced at coding a single lexical concept in two languages. Consistent with this experience, bilingual children show enhancements on dimensional card-sorting tasks that require the same concept to be re-coded in a different way (Bialystok, 1999; Bialystok & Martin, 2004) an...
Although spatial language and spatial cognition covary over development and across languages, determining the causal direction of this relationship presents a challenge. Here we show that mature human spatial cognition depends on the acquisition of specific aspects of spatial language. We tested two cohorts of deaf signers who acquired an emerging sign language in Nicaragua at the same age but during different time periods: the first cohort of signers acquired the language in its infancy, and 10 y later the second cohort of signers acquired the language in a more complex form. We found that the second-cohort signers, now in their 20s, used more consistent spatial language than the first-cohort signers, now in their 30s. Correspondingly, they outperformed the first cohort in spatially guided searches, both when they were disoriented and when an array was rotated. Consistent linguistic marking of left-right relations correlated with search performance under disorientation, whereas consistent marking of ground information correlated with search in rotated arrays. Human spatial cognition therefore is modulated by the acquisition of a rich language.

Keywords: spatial language | language and thought | Nicaraguan Sign Language
Iconic mappings between words and their meanings are far more prevalent than once estimated, and seem to support children’s acquisition of new words, spoken or signed. We asked whether iconicity’s prevalence in sign language overshadows other factors known to support spoken vocabulary development, including neighborhood density (the number of lexical items phonologically similar to the target), and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children’s American Sign Language (ASL) productive acquisition of 332 signs (Anderson & Reilly, 2002), and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage them to expand their vocabulary.
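The analysis above reported independent effects of iconicity, neighborhood density, and lexical frequency in a mixed-effects logistic regression. As a simplified, hypothetical illustration of how such predictors enter a logistic model of acquisition, the sketch below fits a plain logistic regression (no random effects for child or item, unlike the published analysis) to simulated data; all variable names, effect sizes, and data are invented for the example.

```python
# Illustrative sketch only: simulated child-by-sign observations in which
# three standardized predictors each independently raise the probability
# that a sign is acquired. Effect sizes (0.8, 0.5, 0.6) are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000  # simulated observations

iconicity = rng.normal(size=n)   # how iconic the sign is (standardized)
density = rng.normal(size=n)     # phonological neighborhood density
frequency = rng.normal(size=n)   # lexical frequency

# Each predictor contributes independently to the log-odds of acquisition.
linpred = 0.8 * iconicity + 0.5 * density + 0.6 * frequency
p_acquired = 1 / (1 + np.exp(-linpred))
acquired = rng.binomial(1, p_acquired)

X = np.column_stack([iconicity, density, frequency])
model = LogisticRegression().fit(X, acquired)

# Recovered coefficients should all be positive, mirroring the finding
# that each factor independently facilitates vocabulary acquisition.
print(dict(zip(["iconicity", "density", "frequency"],
               model.coef_[0].round(2))))
```

A faithful replication would instead use a mixed-effects logistic regression (e.g., `lme4::glmer` in R or `statsmodels` Bayesian mixed GLMs in Python) with random intercepts for child and sign, as in the reanalysis described above.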
Developmental studies have identified a strong correlation in the timing of language development and false-belief understanding. However, the nature of this relationship remains unresolved. Does language promote false-belief understanding, or does it merely facilitate development that could occur independently, albeit on a delayed timescale? We examined language development and false-belief understanding in deaf learners of an emerging sign language in Nicaragua. The use of mental-state vocabulary and performance on a low-verbal false-belief task were assessed, over 2 years, in adult and adolescent users of Nicaraguan Sign Language. Results show that those adults who acquired a nascent form of the language during childhood produce few mental-state signs and fail to exhibit false-belief understanding. Furthermore, those whose language developed over the period of the study correspondingly developed in false-belief understanding. Thus, language learning, over and above social experience, drives the development of a mature theory of mind.

The capacity to infer other people's mental states, and to use this information to predict behavior, is a central cognitive ability that emerges early in human development. By the age of 2, children demonstrate some implicit understanding of what others believe (Clements & Perner, 1994; Onishi & Baillargeon, 2005; Southgate & Csibra, 2007; Surian, Caldi, & Sperber, 2007), yet they do not reliably use such understanding to explicitly predict others' behavior until 2 years later (Wellman, Cross, & Watson, 2001). Indeed, some researchers have proposed that an explicit understanding of others' false beliefs requires particular linguistic experience (Milligan, Astington, & Dack, 2007; Perner & Ruffman, 2005). If so, what would happen if the relevant language exposure were unavailable until adulthood?
Can other life experience support the representation of false belief?

Previous studies have found that the timing of false-belief understanding depends on language in both typically developing and language-delayed children (for a review, see Milligan et al., 2007). However, in this research, language development and life experience have necessarily been conflated; both correlate with educational experience, socioeconomic status, and, most critically, age. Consequently, the nature of the link between language and false-belief understanding remains unresolved. Are particular language milestones prerequisite for false-belief understanding, or do language abilities merely facilitate the development of a theory of mind, a domain of cognition that could mature independently, albeit on a delayed timescale? We examined these questions with a population of adults with minimal language exposure during childhood. Because Nicaraguan Sign Language (NSL) emerged only recently, deaf Nicaraguan adults provide a natural opportunity to disentangle language exposure and life experience. NSL first appeared in the 1970s among deaf children entering special-education schools (Kegl, Senghas, & Coppola, 1999; Po...
Bilinguals report more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at (a) semantic and/or (b) phonological levels, or (c) that bilinguals use each language less frequently than monolinguals. Bilinguals who speak one language and sign another can help decide between these alternatives because their languages lack phonological overlap. Twenty-two American Sign Language (ASL)-English bilinguals, 22 English monolinguals, and 11 Spanish-English bilinguals named 52 pictures in English. Despite no phonological overlap between their languages, ASL-English bilinguals had more TOTs than monolinguals, and as many TOTs as Spanish-English bilinguals. These data eliminate phonological blocking as the exclusive source of bilingual disadvantages. A small advantage of ASL-English over Spanish-English bilinguals in correct retrievals is consistent with semantic interference and a minor role for phonological blocking. However, this account faces substantial challenges. We argue that reduced frequency of use is the more comprehensive explanation of TOT rates in all bilinguals.

All language users report occasional difficulty retrieving words they are sure they know (R. Brown & McNeill, 1966; A. S. Brown, 1991; Schwartz, 1999). Such experiences have been called tip-of-the-tongue (TOT) states for speakers and tip-of-the-fingers (TOF) states for signers (Thompson, Emmorey, & Gollan, 2005). TOTs offer an opportunity to view the mechanisms of language production under a magnifying glass by illuminating points of weakness in the system. Signers and speakers experiencing a TOF/TOT often retrieve meaning-related alternative words (e.g., hyena for scavenger), and also form-related alternatives (e.g., scaffolding), suggesting separate access stages for meaning and for form in language production (e.g., Bock & Levelt, 1994).
Bilinguals with two spoken languages (unimodal bilinguals) experience significantly more TOTs than monolinguals, suggesting that the mechanism underlying TOTs is sensitive to the existence of two lexicons, two phonological systems, or both (Gollan & Acenas, 2004; Gollan & Silverberg, 2001). Evidence from bilinguals who are fluent in a spoken and a signed language (bimodal bilinguals) can help differentiate between accounts of the increased TOT rate in bilinguals and of the TOT phenomenon itself.

The activation of form-related words during TOTs led to perhaps the most intuitive account of the TOT phenomenon, the "phonological blocking" hypothesis. On this view, TOTs arise ...
Two populations have been found to exhibit delays in theory of mind (ToM): deaf children of hearing parents and children with autism spectrum disorder (ASD). Deaf children exposed to sign from birth by their deaf parents, however, show no such delay, suggesting that early language exposure is key to ToM development. Sign languages also present frequent opportunities for visual perspective-taking (VPT), raising the question of whether sign exposure could benefit children with ASD. We present the first study of children with ASD exposed to sign from birth by their deaf parents. Seventeen native-signing children with a confirmed ASD diagnosis and a chronological- and mental-age-matched control group of 18 typically developing (TD) native-signing deaf children were tested on American Sign Language (ASL) comprehension, two minimally verbal social cognition tasks (ToM and VPT), and one spatial cognition task (mental rotation). The TD children outperformed the children with ASD on ASL comprehension (p < 0.0001), ToM (p = 0.02), and VPT (p < 0.01), but not mental rotation (p = 0.12). Language ability correlated strongly with ToM (p < 0.01) and VPT (p < 0.001), but not mental rotation (n.s.). Native exposure to sign is thus insufficient to overcome the language and social impairments implicated in ASD. Contrary to the hypothesis that sign could provide a scaffold for ToM skills, we find that signing children with ASD are unable to access language well enough to gain any potential benefit sign might confer. Our results support a strong link between the development of social cognition and language, regardless of modality, for both TD children and children with ASD. Autism Res 2016, 9: 1304-1315. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Objective: To examine whether children who are deaf or hard of hearing who have hearing parents can develop age-level vocabulary skills when they have early exposure to a sign language.

Study design: This cross-sectional study of vocabulary size included 78 children who are deaf or hard of hearing, between 8 and 68 months of age, who were learning American Sign Language (ASL) and had hearing parents. Children who were exposed to ASL before 6 months of age, or between 6 and 36 months of age, were compared with a reference sample of 104 deaf and hard of hearing children who have parents who are deaf and sign.

Results: Deaf and hard of hearing children with hearing parents who were exposed to ASL in the first 6 months of life had age-expected receptive and expressive vocabulary growth. Children who had a short delay in ASL exposure had relatively smaller expressive, but not receptive, vocabulary sizes, and made rapid gains.

Conclusions: Although hearing parents generally learn ASL alongside their children who are deaf, their children can develop age-expected vocabulary skills when exposed to ASL during infancy. Children who are deaf with hearing parents can predictably and consistently develop age-level vocabularies at rates similar to native signers; early vocabulary skills are robust predictors of development across domains.