ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). For each sign it includes subjective frequency ratings from 25–31 deaf signers, iconicity ratings from 21–37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
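As a minimal sketch of how a downloaded copy of the database might be queried, assuming a CSV export with columns named SignFrequency, Iconicity, and Initialized (illustrative names, not guaranteed to match the actual release headers):

    import pandas as pd

    # Load a local export of the ASL-LEX database. The filename and
    # column names are assumptions for illustration only.
    lex = pd.read_csv("asl_lex.csv")

    # Signs rated as highly frequent by deaf signers (assuming a 1-7 scale).
    frequent = lex[lex["SignFrequency"] >= 6]
    print(len(frequent), "high-frequency signs")

    # Mean iconicity rating for initialized vs. non-initialized signs.
    print(lex.groupby("Initialized")["Iconicity"].mean())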
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: how many of these conclusions reflect language-general principles, and how many are modality-specific? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. In sign languages, neighborhood density effects likewise vary, depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012) and show that, if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
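The core principle borrowed from Chen and Mirman (2012) is that a neighbor's net effect on a target flips from facilitative to inhibitory as the neighbor's own activation grows: shared sub-lexical units contribute facilitation, while lexical-level lateral inhibition grows faster. The toy update below illustrates that crossover; the functional forms and parameter values are illustrative assumptions, not the published model's equations.

    def neighbor_influence(neighbor_act, facilitation=0.4, inhibition=0.9):
        # Facilitation grows linearly with the neighbor's activation;
        # inhibition grows quadratically, so the net input to the target
        # flips from positive (a weakly active neighbor helps) to negative
        # (a strongly active neighbor competes).
        return facilitation * neighbor_act - inhibition * neighbor_act ** 2

    for act in (0.1, 0.3, 0.5, 0.8):
        print(f"neighbor activation {act:.1f} -> "
              f"net input to target {neighbor_influence(act):+.3f}")

With these arbitrary parameters the net input changes sign once the neighbor's activation exceeds facilitation/inhibition (about 0.44 here), which is the qualitative behavior used to reconcile facilitative and inhibitory density effects within one architecture.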
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0), which contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/, and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
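A hedged sketch of how the reported correlations could be re-computed from the OSF download; the filename and column names (Frequency, Iconicity, Complexity) are assumptions to be matched against the actual headers:

    import pandas as pd
    from scipy.stats import pearsonr

    # Drop signs missing any of the three measures before correlating.
    lex = pd.read_csv("asl_lex_2.csv").dropna(
        subset=["Frequency", "Iconicity", "Complexity"])

    for x, y in [("Frequency", "Iconicity"),
                 ("Frequency", "Complexity"),
                 ("Iconicity", "Complexity")]:
        r, p = pearsonr(lex[x], lex[y])
        print(f"{x} vs. {y}: r = {r:+.2f}, p = {p:.3g}")

Under the abstract's findings, all three correlations should come out negative: frequent signs are less iconic and less complex, and iconic signs are less complex.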
Current evidence suggests that multimorphemic words are represented differently in production and perception. In perception, it is widely believed that both whole-word and root representations exist, while in production there is little evidence for whole-word representations. The present investigation demonstrates that whole-word and root frequency independently predict the duration of words suffixed with -ing, -ed, and -s, revealing that both root and word representations play a role in the production of inflected English words. In a second line of analysis, we find that the number of inflected phonological neighbours independently predicts the duration of monomorphemic words, which extends these results and suggests that whole-word representations exist at the lexical level. Together, these results suggest that both root and word representations of inflected words are stored in the lexicon and are relevant for the production of both monomorphemic and multimorphemic words.
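The key analytic move is entering whole-word and root frequency as simultaneous predictors of spoken duration, so that each effect is estimated with the other controlled. A minimal sketch of such a regression, with a hypothetical data file and column names (not the authors' materials):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Each row is one spoken token of an -ing/-ed/-s word, with its
    # measured duration and corpus frequencies (all names hypothetical).
    tokens = pd.read_csv("inflected_tokens.csv")
    tokens["log_dur"] = np.log(tokens["duration_ms"])
    tokens["log_word_freq"] = np.log1p(tokens["word_frequency"])
    tokens["log_root_freq"] = np.log1p(tokens["root_frequency"])

    # Both frequency measures entered together; an independent effect of
    # each would support both whole-word and root representations.
    model = smf.ols("log_dur ~ log_word_freq + log_root_freq",
                    data=tokens).fit()
    print(model.summary())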
Previous research has shown that listeners can distinguish phonemically identical onsets of monomorphemic words (e.g., cap and captain) using acoustic cues (Davis, Marslen-Wilson, & Gaskell, 2002). This study investigates whether this finding extends to multimorphemic words, asking whether listeners can use phonetic information to distinguish unsuffixed from suffixed words before they differ phonemically (e.g., clue vs. clueless). We report four experiments investigating this issue using forced-choice identification and mouse-tracking tasks. We find that listeners are indeed able to distinguish mono- and multimorphemic words using only subphonemic information. Our experiments reveal that duration information alone is sufficient to make this discrimination and that listeners make use of an abstract rule relating duration to morphological structure. The implications of these results for theories of morphological processing are discussed.
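One simple way to probe the duration claim in forced-choice data of this kind would be a logistic regression of listeners' choices on stem duration; the data frame and column names below are hypothetical, not the authors' analysis code:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per forced-choice trial. chose_suffixed is 1 if the
    # listener picked the suffixed alternative (e.g., "clueless"),
    # 0 for the unsuffixed one (e.g., "clue").
    resp = pd.read_csv("forced_choice_responses.csv")

    model = smf.logit("chose_suffixed ~ stem_duration_ms", data=resp).fit()
    print(model.summary())

A reliable coefficient on stem_duration_ms would indicate that listeners' choices track stem duration, and its sign would show which reading longer stems favor.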