Disentangling the effects of sensory and cognitive factors on neural reorganization is fundamental for establishing the relationship between plasticity and functional specialization. Auditory deprivation in humans provides unique insight into this problem because the origin of the anatomical and functional changes observed in deaf individuals is not only sensory but also cognitive, owing to the use of visual communication strategies such as sign language and speechreading. Here, we describe a functional magnetic resonance imaging study of individuals with differing auditory deprivation and sign language experience. We find that sensory and cognitive experience cause plasticity in anatomically and functionally distinguishable substrates. This suggests that after plastic reorganization, cortical regions adapt to process a different type of input signal but preserve the nature of the computation they perform, at both a sensory and a cognitive level.
Abstract& An important method for studying how the brain processes familiar stimuli is to present the same item on more than one occasion and measure how responses change with repetition. Here we use repetition priming in a sparse functional magnetic resonance imaging (fMRI) study to probe the neuroanatomical basis of spoken word recognition and the representations of spoken words that mediate repetition priming effects. Participants made lexical decisions to words and pseudowords spoken by a male or female voice that were presented twice, with half of the repetitions in a different voice. Behavioral and neural priming was observed for both words and pseudowords and was not affected by voice changes. The fMRI data revealed an elevated response to words compared to pseudowords in both posterior and anterior temporal regions, suggesting that both contribute to word recognition. Both reduced and elevated activation for second presentations (repetition suppression and enhancement) were observed in frontal and posterior regions. Correlations between behavioral priming and neural repetition suppression were observed in frontal regions, suggesting that repetition priming effects for spoken words reflect changes within systems involved in generating behavioral responses. Based on the current results, these processes are sufficiently abstract to display priming despite changes in the physical form of the stimulus and operate equivalently for words and pseudowords. &
In two experiments, Greek-English bilinguals alternated between performing a lexical decision task in Greek and in English. The cost to performance on switch trials interacted with response repetition, implying that a source of this "switch cost" is at the level of response mapping or initiation. Orthographic specificity also affected switch cost. Greek and English have partially overlapping alphabets, which enabled us to manipulate language specificity at the letter level, rather than only at the level of letter clusters. Language-nonspecific stimuli used only symbols common to both Greek and English, whereas language-specific stimuli contained letters unique to just one language. The switch cost was markedly reduced by such language-specific orthography, and this effect did not interact with the effect of response repetition, implying a separate, stimulus-sensitive source of switch costs. However, we argue that this second source is not within the word-recognition system, but at the level of task schemas, because the reduction of switch cost with language-specific stimuli was abolished when these stimuli were intermingled with language-nonspecific stimuli.
Highlights
► Deaf native signers, early and late learners judged BSL sentence grammaticality.
► Early learners performed worse the later they were exposed to BSL.
► Late learners’ performance was not affected by age of learning BSL.
► Unique effect of age of learning BSL found in early learners.
► Prelingually deaf late learners may benefit from first language competence in English.
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer reaction times and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of British Sign Language (BSL) spotted real BSL signs embedded in nonsense-sign contexts more easily when the nonsense signs were possible BSL signs than when they were not. A control experiment showed that there were no articulatory differences between the different contexts. A second control experiment on segmentation in spoken Dutch strengthened the claim that the main BSL result likely reflects the operation of a lexical-viability constraint. It appears that signed and spoken languages, in spite of radical input differences, are segmented so as to leave no residues of the input that cannot be words.
Working memory (WM) for spoken language improves when to-be-remembered items correspond to pre-existing representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers as well as hearing non-signers. There were four different kinds of stimuli: British Sign Language (BSL, familiar to the signers); Swedish Sign Language