The possibility that, following early auditory deprivation, the remaining senses such as vision are enhanced has been met with much excitement. However, deaf individuals exhibit both better and worse visual skills than hearing controls. We show that, when deafness is considered to the exclusion of other confounds, enhancements in visual cognition are noted. The changes are not, however, widespread but are selective, limited, as we propose, to those aspects of vision that are attentionally demanding and would normally benefit from auditory-visual convergence. The behavioral changes are accompanied by a reorganization of multisensory areas, ranging from higher-order cortex to early cortical areas, highlighting cross-modal interactions as a fundamental feature of brain organization and cognitive processing.
Background: Early deafness leads to enhanced attention in the visual periphery. Yet whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that deaf individuals gain a performance advantage in a complex attentional task.

Methodology/Principal Findings: We employed the Useful Field of View (UFOV) task, which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, a comparison of deaf and hearing adults with or without sign language skills establishes that deafness, not sign language use, drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age.

Conclusions/Significance: This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery are slowly augmented, eventually resulting in a clear behavioral advantage by pre-adolescence on a selective visual attention task.
Deaf children have been characterized as impulsive, distractible, and unable to sustain attention. However, past research has tested deaf children born to hearing parents, who are likely to have experienced language delays. The purpose of this study was to determine whether an absence of auditory input modulates attentional problems in deaf children with no delayed exposure to language. Two versions of a continuous performance test were administered to 37 deaf children born to Deaf parents and 60 hearing children, all aged 6–13 years. A vigilance task was used to measure sustained attention over the course of several minutes, and a distractibility test provided a measure of the ability to ignore task-irrelevant information – selective attention. Both tasks provided assessments of cognitive control through analysis of commission errors. The deaf and hearing children did not differ on measures of sustained attention. However, younger deaf children were more distracted by task-irrelevant information in their peripheral visual field, and deaf children produced a higher number of commission errors in the selective attention task. It is argued that this is not likely to be an effect of audition on cognitive processing, but may rather reflect difficulty in endogenous control of reallocated visual attention resources stemming from early profound deafness.
Learners' ability to recognize linguistic contrasts in American Sign Language (ASL) was investigated using a paired-comparison discrimination task. Minimal pairs containing contrasts in five linguistic categories (the formational parameters of movement, handshape, orientation, and location in ASL phonology, plus a category comprising contrasts in complex morphology) were presented in sentence contexts to a sample of 127 hearing learners at beginning and intermediate levels of proficiency and 10 Deaf native signers. Participants' responses were analyzed to determine the relative difficulty of the linguistic categories and the effect of proficiency level on performance. The results indicated that movement contrasts were the most difficult and location contrasts the easiest, with the other categories of stimuli of intermediate difficulty. These findings have implications for language learning in situations in which the first language is a spoken language and the second language (L2) is a signed language. In such situations, the construct of language transfer does not apply to the acquisition of L2 phonology because of fundamental differences between the phonological systems of signed and spoken languages, which are associated with differences between the modalities of speech and sign.

The authors are grateful to Tom Weymann and Sarah Schley for their assistance in conducting the data analysis, Gaurav Mathur and the anonymous reviewers for their thoughtful comments and helpful suggestions, and Joe Hamilton, Jenamarie Bacot, and Jon Lejeune for helping to develop and record the stimuli.
An important question in understanding language processing is whether there are distinct neural mechanisms for processing specific types of grammatical structure, such as syntax versus morphology, and, if so, what the basis of the specialization might be. However, this question is difficult to study: a given language typically conveys its grammatical information in one way (e.g., English marks "who did what to whom" using word order, and German uses inflectional morphology). American Sign Language permits either device, enabling a direct within-language comparison. During functional (f)MRI, native signers viewed sentences that used only word order and sentences that included inflectional morphology. The two sentence types activated an overlapping network of brain regions, but with differential patterns. Word order sentences activated left-lateralized areas involved in working memory and lexical access, including the dorsolateral prefrontal cortex, the inferior frontal gyrus, the inferior parietal lobe, and the middle temporal gyrus. In contrast, inflectional morphology sentences activated areas involved in building and analyzing combinatorial structure, including bilateral inferior frontal and anterior temporal regions as well as the basal ganglia and medial temporal/limbic areas. These findings suggest that for a given linguistic function, neural recruitment may depend on the cognitive resources required to process specific types of linguistic cues.

Keywords: brain | language | sign language | syntax | neuroimaging

Despite the great diversity of human languages, the neural basis of language processing has been documented in only a very few languages. To the extent that the neural underpinnings of language are truly universal, the available research may reflect quite accurately what would be found across languages.
Alternatively, the fact that the grammars of different languages encode information in different ways may place different processing demands on the neurocognitive systems supporting language (1). For example, in English the order of the words in the sentence John gave his lunch to Mary encodes the grammatical "dependency relationships", essentially, who did what to whom. Ordering the words differently would convey a different meaning or no meaning at all. In other languages such as German or American Sign Language, word order is less restricted because dependency relationships can be marked by other cues, such as tagging words with inflectional morphemes (e.g., in German, suffixes are added to words within the noun phrase to mark the noun's "case" or role in the sentence, e.g., as "doer" or "receiver" of an action). Whether these different strategies for encoding grammatical information rely on a unitary network of brain regions specialized for processing "grammar" in a broad sense, or whether they impose distinct processing demands relying on nonidentical neural mechanisms, is a fundamental question. It has implications for our understanding of the neurocognition of language, for its relationship to other ...
What are the experiences of Deaf and hard-of-hearing students in applying for predoctoral internships in professional psychology? Are internship programs aware of accessibility issues in regard to these applicants? Federal laws, accreditation guidelines of the American Psychological Association, and rules of the Association of Psychology Postdoctoral and Internship Centers require that internship training programs provide access for interns with disabilities. Compliance with these requirements is still evolving, however. Several recent examples of violations are outlined, and specific laws and ethical issues involved are discussed. Internship training centers must have information on their obligations regarding the provision of accessible services to Deaf and hard-of-hearing trainees, the adverse impact on applicants of certain interview questions and comments, and ways to provide equal access to training for qualified Deaf and hard-of-hearing students.

Internship is a required part of any clinical psychology training program. Without approved internship training, it is impossible to obtain a doctoral degree from an accredited program or to gain entry to state licensing examinations and become licensed as a clinical psychologist. The fact that the internship application process has been described as an arduous experience for any clinical psychology student is well documented (e.g., Oehlert, Lopez, &
Short-term memory (STM), or the ability to hold verbal information in mind for a few seconds, is known to rely on the integrity of a frontoparietal network of areas. Here, we used functional magnetic resonance imaging to ask whether a similar network is engaged when verbal information is conveyed through a visuospatial language, American Sign Language, rather than speech. Deaf native signers and hearing native English speakers performed a verbal recall task, in which they had to first encode a list of letters in memory, maintain it for a few seconds, and finally recall it in the order presented. The frontoparietal network described as mediating STM in speakers was also observed in signers, with its recruitment appearing independent of the modality of the language. This finding supports the view that signed and spoken STM rely on similar mechanisms. However, deaf signers and hearing speakers differentially engaged key structures of the frontoparietal network as the stages of STM unfold. In particular, deaf signers relied to a greater extent than hearing speakers on passive memory storage areas during encoding and maintenance, but on executive process areas during recall. This work opens new avenues for understanding similarities and differences in STM performance in signers and speakers.