Lexical access in sign language: a computational model

2014 | DOI: 10.3389/fpsyg.2014.00428

Abstract: Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processi…

Cited by 36 publications (55 citation statements)
References 57 publications
“…Secondly, the differentiation in the acquisition of signs based on their overlapping sublexical features contributes to the small but growing field of the study of lexical access in sign language, while concurrently reinforcing our understanding of the lexicon generally. The facilitated acquisition of signs that share handshape features, relative to those that share location features, parallels previous findings that handshape neighbors facilitate lexical retrieval in deaf signers (Carreiras et al., 2008; Caselli & Cohen-Goldberg, 2014). In turn, these findings support previous theories suggesting that the structure of the lexicon itself influences both first and second language acquisition.…”
Section: Discussion (supporting)
confidence: 80%
See 3 more Smart Citations
“…Secondly, the differentiation in acquisition of signs based on their overlapping sublexical features contribute to the small but growing field of the study of lexical access in sign language as well as concurrently reinforcing our understanding of the lexicon generally. Facilitated acquisition of signs that share handshape features relative to those with location features parallel previous findings that handshape neighbors facilitate lexical retrieval in deaf signers (Carreiras et al, 2008;Caselli & Cohen-Goldberg, 2014). In turn, these findings support previous theories that suggest the structure of the lexicon itself influence both first and second language acquisition.…”
Section: Discussionsupporting
confidence: 80%
“…Previous work has found greater facilitative effects for handshape during lexical sign retrieval, but greater inhibition for location features (Carreiras et al., 2008; Caselli & Cohen-Goldberg, 2014; Emmorey & Corina, 1990). Caselli and Cohen-Goldberg (2014) computationally simulated activation of the sign lexicon and concluded that handshape neighbors have lower resting-state activation and introduce inhibitory input for less time than location neighbors do. These findings may also apply to sign language learning, insofar as decreased inhibition from handshape neighbors relative to location neighbors aids the acquisition of signs.…”
Section: Discussion (mentioning)
confidence: 99%
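The mechanism this excerpt attributes to Caselli and Cohen-Goldberg (2014), in which a neighbor with lower resting-state activation sends inhibitory input for a shorter time and therefore exerts less total inhibition on a target sign, can be illustrated with a minimal localist activation sketch. Everything below (the update rule, parameter values, and function name) is an illustrative assumption, not the authors' actual model:

```python
def simulate(rest, inhibit_steps, total_steps=50, decay=0.1, inhibition=0.3):
    """Activation of a target sign receiving inhibition from one neighbor.

    rest          -- neighbor's resting-state activation (assumed value)
    inhibit_steps -- number of steps the neighbor sends inhibitory input
    """
    target = 0.0
    trace = []
    for t in range(total_steps):
        # Constant bottom-up support for the target, scaled by a gain of 0.1.
        external = 1.0
        # The neighbor inhibits the target only while its input is active;
        # inhibition strength is proportional to its resting activation.
        inhib = inhibition * rest if t < inhibit_steps else 0.0
        target += external * 0.1 - decay * target - inhib
        # Clip activation to a bounded range, as in localist networks.
        target = min(max(target, -0.2), 1.0)
        trace.append(target)
    return trace

# Handshape neighbor (per the excerpt): lower resting activation,
# shorter inhibition window -- assumed parameter values.
handshape = simulate(rest=0.1, inhibit_steps=5)
# Location neighbor: higher resting activation, longer inhibition window.
location = simulate(rest=0.5, inhibit_steps=20)

# The target settles higher (i.e., suffers less total inhibition) with the
# handshape neighbor, mirroring the asymmetry the excerpt describes.
```

Under these assumed parameters, the final activation with the handshape neighbor exceeds that with the location neighbor, which is the qualitative pattern (handshape facilitation, location inhibition) the citing passage reports.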
“…For instance, priming studies with sign languages have shown the expected facilitatory effect of a semantic relation (Mayberry & Witcher, 2005) but not always clear effects of the phonological parameters. The phonological parameters (location, handshape, and movement) influence sign recognition in different ways, with some parameters showing an inhibitory effect and others showing facilitation (Gutierrez, Williams, Grosvald, & Corina, 2012; see also Caselli & Cohen-Goldberg, 2014, for a computational model). Furthermore, results are not consistent: for example, some studies have found location to have an inhibitory effect on lexical retrieval (Corina & Hildebrandt, 2002; Carreiras et al., 2008), while other studies have found a facilitatory effect of location combined with movement (Baus, Gutiérrez, & Carreiras, 2014; Dye & Shih, 2006).…”
(mentioning)
confidence: 99%