2020
DOI: 10.1111/lang.12409

Lexical Recognition in Deaf Children Learning American Sign Language: Activation of Semantic and Phonological Features of Signs

Abstract: Children learning language efficiently process single words and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual–spatial modality affects lexical recognition. Twenty native or early‐exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye‐tracking study. Participants were presented with a single ASL sign, targe…

Citation Types: 1 supporting, 6 mentioning, 0 contrasting

Cited by 6 publications (7 citation statements)
References: 76 publications

“…While fast‐mapping is a known and robust phenomenon among children learning spoken language (Bion et al., 2013; Carey & Bartlett, 1978; Halberda, 2003; Horst & Samuelson, 2008), the current study is the first to our knowledge to demonstrate that children can rapidly map and retain word meanings, irrespective of the sensory modality of linguistic input. Across both experiments, there were increased fixations to the target picture relative to the distractor picture at test, both in a sign recognition window that has been established in previous studies of sign recognition in deaf children (Lieberman & Borovsky, 2020; MacDonald et al., 2018), as well as in a later window. The late window provided more robust target fixations in Experiment 2, which likely reflects the more difficult nature of the task relative to Experiment 1.…”
Section: Discussion (supporting)
confidence: 71%
“…Children generally fixated the sign video until approximately 600 ms following sign onset, and then shifted gaze to the target picture. Following convention from previous studies of ASL lexical recognition (Lieberman & Borovsky, 2020; MacDonald et al., 2018), we analyzed looks from 600 to 2500 ms following sign onset, which we call the sign recognition window. However, given that recognition of novel signs is a more difficult task than recognition of familiar signs and thus often occurs later in the timecourse (Bion et al., 2013; Booth & Waxman, 2009; Borovsky et al., 2016; Houston‐Price et al., 2010; Mather & Plunkett, 2010), we analyzed a second, late window, from 2500 to 3500 ms following sign offset.…”
Section: Results (mentioning)
confidence: 99%
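
The time-window analysis described in this excerpt is easy to make concrete. Below is a minimal sketch in Python with pandas, not code from the cited studies: the sample data and all column names (trial, time_ms, aoi) are hypothetical. It computes the proportion of looks to the target picture, out of all target-or-distractor looks, within the sign recognition window (600–2500 ms after sign onset) and the later window.

```python
import pandas as pd

# Hypothetical eye-tracking samples: one row per gaze sample, time-locked
# to sign onset, labeled with the area of interest (AOI) being fixated.
samples = pd.DataFrame({
    "trial":   [1, 1, 1, 1, 2, 2, 2, 2],
    "time_ms": [700, 1300, 1900, 2600, 650, 1500, 2800, 3200],
    "aoi":     ["target", "target", "distractor", "target",
                "sign_video", "target", "target", "distractor"],
})

def target_proportion(df, start_ms, end_ms):
    """Proportion of target looks among target/distractor looks in a window."""
    in_window = df[(df["time_ms"] >= start_ms) & (df["time_ms"] < end_ms)]
    # Looks to the sign video itself are excluded from the denominator.
    looks = in_window[in_window["aoi"].isin(["target", "distractor"])]
    return (looks["aoi"] == "target").mean()

# Sign recognition window from the excerpt: 600-2500 ms after sign onset.
print(target_proportion(samples, 600, 2500))
# Later window used for novel signs: 2500-3500 ms.
print(target_proportion(samples, 2500, 3500))
```

In the studies themselves, per-window proportions like these would presumably feed into statistical models of the fixation time course; the sketch only illustrates the windowing step.
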
“…A small set of studies have leveraged eye-tracking to investigate language use in deaf signers, including investigations of sign language narrative viewing (Bosworth & Stone, 2021), visual world paradigms targeting comprehension of sign, speech, and sign-supported speech (A. M. Lieberman & Borovsky, 2020; A.…”
Section: Phonological Decoding As An Early Reading Strategy (mentioning)
confidence: 99%
“…The second experiment adapts the visual world paradigm to a signed language scenario by presenting the linguistic stimulus as a video of a sign to study the time course of co-activation of location and handshape. This experiment will also be a means of testing whether the paradigm can be adapted for sign language with competition from a single sub-lexical unit (combined sub-lexical units have been used with the visual world paradigm in Lieberman & Borovsky, 2020; Lieberman et al., 2015; Thompson et al., 2013).…”
Section: This Study: Eye-tracking and The Visual World Paradigm (mentioning)
confidence: 99%