ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25–31 deaf signers, iconicity ratings from 21–37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
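A neighborhood density estimate of the kind described can be computed by counting, for each sign, how many other signs share most of its coded phonological properties. The sketch below is a minimal illustration: the feature names are drawn from the abstract, but the specific "differ on at most one property" neighbor criterion and the toy codings are assumptions, not ASL-LEX's actual coding scheme.

```python
# Minimal sketch: estimating neighborhood density from coded phonological
# properties. The neighbor criterion (signs differ on at most one coded
# property) and the toy codings below are illustrative assumptions.

FEATURES = ["sign_type", "selected_fingers", "flexion", "major_location", "movement"]

def is_neighbor(sign_a, sign_b):
    """Two signs count as neighbors if they differ on at most one coded property."""
    diffs = sum(1 for f in FEATURES if sign_a[f] != sign_b[f])
    return 0 < diffs <= 1  # exclude identically coded entries

def neighborhood_density(target, lexicon):
    """Count lexicon entries that are phonological neighbors of `target`."""
    return sum(1 for s in lexicon if s is not target and is_neighbor(target, s))

lexicon = [
    {"sign_type": "one-handed", "selected_fingers": "index", "flexion": "flat",
     "major_location": "head", "movement": "straight"},
    {"sign_type": "one-handed", "selected_fingers": "index", "flexion": "flat",
     "major_location": "torso", "movement": "straight"},   # differs in location only
    {"sign_type": "two-handed", "selected_fingers": "all", "flexion": "curved",
     "major_location": "neutral", "movement": "circular"}, # differs on every property
]

print(neighborhood_density(lexicon[0], lexicon))  # → 1
```

With this criterion, a dense neighborhood simply means many signs sit one phonological step away from the target, which is the intuition behind the database's density measures.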
Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture. Despite the need for deep interdisciplinary knowledge, existing research occurs in separate disciplinary silos and tackles separate portions of the sign language processing pipeline. This leads to three key questions: 1) What does an interdisciplinary view of the current landscape reveal? 2) What are the biggest challenges facing the field? and 3) What are the calls to action for people working in the field? To help answer these questions, we brought together a diverse group of experts for a two-day workshop. This paper presents the results of that interdisciplinary workshop, providing key background that is often overlooked by computer scientists, a review of the state-of-the-art, a set of pressing challenges, and a call to action for the research community.
Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute the claims (1) that there are harmful effects of sign language and (2) that listening and spoken language are necessary for optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling in light of natural sign languages providing a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.
Iconic mappings between words and their meanings are far more prevalent than once estimated, and seem to support children’s acquisition of new words, spoken or signed. We asked whether iconicity’s prevalence in sign language overshadows other factors known to support spoken vocabulary development, including neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children’s American Sign Language (ASL) productive acquisition of 332 signs (Anderson & Reilly, 2002), and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
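The logic of the analysis can be sketched with a plain logistic regression predicting whether a sign has been acquired from iconicity, neighborhood density, and frequency. This is a deliberate simplification: the study used mixed-effects models (with random effects for children and items), which are omitted here, and the data below are simulated for illustration only.

```python
import math, random

# Simplified sketch of the analysis logic: plain (fixed-effects-only)
# logistic regression predicting sign acquisition from three predictors.
# The real study used mixed-effects models; random effects are omitted
# here, and the toy data are simulated, not the Anderson & Reilly data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient ascent on the log-likelihood; returns [bias, *weights]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

random.seed(0)
# toy predictors per sign: (iconicity, density, frequency), standardized
X = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(300)]
# acquisition is made more likely by all three predictors, by construction
y = [1 if sigmoid(0.8 * a + 0.5 * b + 0.6 * c) > random.random() else 0
     for a, b, c in X]

bias, b_icon, b_dens, b_freq = fit_logistic(X, y)
# all three coefficients should come out positive, mirroring the
# independent facilitative effects the abstract reports
print(round(b_icon, 2), round(b_dens, 2), round(b_freq, 2))
```

Because the simulated effects are built into the data, the fitted coefficients recover the facilitative direction of each predictor; in the real analysis the random-effects structure additionally absorbs child- and item-level variability.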
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
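The general shape of such an architecture can be shown in a toy spreading-activation network: sublexical feature nodes (handshape, location) excite the lexical nodes that contain them, while lexical nodes inhibit one another. Everything concrete below — node names, connection weights, update rule — is an illustrative assumption, not Chen and Mirman's (2012) implementation or the authors' model.

```python
# Toy spreading-activation sketch in the spirit of the architecture the
# abstract describes: feature nodes feed lexical nodes, which compete via
# lateral inhibition. Parameters and codings are illustrative assumptions.

SIGNS = {
    "SIGN_A": {"handshape": "H1", "location": "head"},
    "SIGN_B": {"handshape": "H1", "location": "head"},   # neighbor of SIGN_A
    "SIGN_C": {"handshape": "H2", "location": "torso"},  # unrelated sign
}

EXCITATION, INHIBITION, DECAY = 0.5, 0.2, 0.1

def recognize(input_features, steps=20):
    """Spread activation from matched feature nodes into the lexical layer."""
    act = {sign: 0.0 for sign in SIGNS}
    for _ in range(steps):
        new = {}
        for sign, feats in SIGNS.items():
            bottom_up = EXCITATION * sum(
                1.0 for slot, val in feats.items() if input_features.get(slot) == val)
            lateral = INHIBITION * sum(a for s, a in act.items() if s != sign)
            new[sign] = max(0.0, (1 - DECAY) * act[sign] + bottom_up - lateral)
        act = new
    return act

act = recognize({"handshape": "H1", "location": "head"})
# SIGN_A and SIGN_B receive equal bottom-up support and inhibit each other;
# SIGN_C receives none and is suppressed.
print(act["SIGN_C"] < act["SIGN_A"])  # → True
```

The modeling moves the abstract describes — staggering when handshape versus location features become available, or weighting sublexical units by their frequency — would enter this sketch as time-varying or weighted `bottom_up` terms.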
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among the lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
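The correlation analyses mentioned are standard Pearson correlations between sign-level variables. A minimal sketch, using invented toy ratings rather than actual ASL-LEX values, shows the frequency–iconicity relationship the abstract reports (frequent signs rated less iconic):

```python
import math

# Minimal Pearson correlation between two sign-level variables.
# The toy ratings below are invented for illustration; they are not
# ASL-LEX values, only constructed to show a negative association.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

frequency = [6.1, 5.4, 4.8, 3.9, 2.7, 1.5]   # toy 1-7 frequency ratings
iconicity = [1.2, 2.0, 2.4, 3.1, 4.5, 5.9]   # toy 1-7 iconicity ratings

r = pearson_r(frequency, iconicity)
print(r < 0)  # → True: strongly negative for these constructed data
```

Run over the full database, the same computation (per pair of variables, restricted to signs with both measures) yields the frequency–iconicity and iconicity–complexity patterns described above.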
Objective: To examine whether children who are deaf or hard of hearing who have hearing parents can develop age-level vocabulary skills when they have early exposure to a sign language. Study design: This cross-sectional study of vocabulary size included 78 children who are deaf or hard of hearing between 8 and 68 months of age who were learning American Sign Language (ASL) and had hearing parents. Children who were exposed to ASL before 6 months of age or between 6 and 36 months of age were compared with a reference sample of 104 deaf and hard of hearing children who have parents who are deaf and sign. Results: Deaf and hard of hearing children with hearing parents who were exposed to ASL in the first 6 months of life had age-expected receptive and expressive vocabulary growth. Children who had a short delay in ASL exposure had relatively smaller expressive but not receptive vocabulary sizes, and made rapid gains. Conclusions: Although hearing parents generally learn ASL alongside their children who are deaf, their children can develop age-expected vocabulary skills when exposed to ASL during infancy. Children who are deaf with hearing parents can predictably and consistently develop age-level vocabularies at rates similar to native signers; early vocabulary skills are robust predictors of development across domains.
Current evidence suggests that there is a difference between the representations of multimorphemic words in production and perception. In perception, it is widely believed that both whole-word and root representations exist, while in production there is little evidence for whole-word representations. The present investigation demonstrates that whole-word and root frequency independently predict the duration of words suffixed with -ing, -ed, and -s, which reveals that both root and word representations play a role in the production of inflected English words. In a second line of analysis, we find that the number of inflected phonological neighbours independently predicts the duration of monomorphemic words, which extends these results and suggests that whole-word representations exist at the lexical level. Together these results suggest that both root and word representations of inflected words are stored in the lexicon and are relevant for the production of both monomorphemic and multimorphemic words.
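The core analysis — word duration predicted jointly by whole-word frequency and root frequency — is a multiple regression with two predictors. The sketch below solves the two-predictor ordinary least squares problem directly via the normal equations on centered data; the toy durations are constructed (as 550 − 15·word − 10·root), not taken from the study, which also controlled for additional covariates.

```python
# Sketch of the kind of analysis the abstract reports: OLS predicting
# word duration from whole-word frequency and root frequency. The toy
# data are constructed as duration = 550 - 15*word_freq - 10*root_freq,
# so the regression recovers both slopes exactly; the real analyses
# involve many more covariates and real corpus measurements.

def ols_two_predictors(x1, x2, y):
    """Slopes for y ~ x1 + x2, via the 2x2 normal equations on centered data."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c1 = [v - m1 for v in x1]
    c2 = [v - m2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return b1, b2

word_freq = [1.0, 2.0, 3.0, 4.0, 5.0, 2.5, 3.5]          # toy log frequencies
root_freq = [2.0, 1.0, 4.0, 3.0, 5.0, 3.0, 2.0]
duration  = [515, 510, 465, 460, 425, 482.5, 477.5]       # ms, constructed

b_word, b_root = ols_two_predictors(word_freq, root_freq, duration)
print(round(b_word, 1), round(b_root, 1))  # → -15.0 -10.0
```

The "independently predict" claim in the abstract corresponds to both slopes remaining nonzero when the two frequency measures are entered together, as in this joint fit.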