Humans use facial expressions to convey many types of meaning in many contexts. The range of meanings spans from basic, possibly innate socio-emotional concepts such as “surprise” to complex, culture-specific concepts such as “carelessly.” The range of contexts in which humans use facial expressions extends from responses to events in the environment to particular linguistic constructions within sign languages. In this mini review, we summarize findings on the use and acquisition of facial expressions by signers and present a unified account of the range of facial expressions used, by referring to three dimensions on which facial expressions vary: semantic, compositional, and iconic.
There is an ongoing debate about whether deaf individuals access phonology when reading, and if so, what impact the ability to access phonology might have on reading achievement. However, the debate so far has been theoretically unspecific on two accounts: (a) the phonological units of oral language that deaf individuals may have represented have not been specified, and (b) there seem to be no explicit cognitive models specifying how phonology and other factors operate in reading by deaf individuals. We propose that deaf individuals have representations of the sublexical structure of oral-aural language that are based on mouth shapes, and that these sublexical units are activated during reading by deaf individuals. We specify the sublexical units of deaf German readers as 11 "visemes" and incorporate the viseme set into a working model of single-word reading by deaf adults based on the dual-route cascaded model of reading aloud by Coltheart, Rastle, Perry, Langdon, and Ziegler (2001. DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256. doi: 10.1037//0033-295x.108.1.204). We assessed the indirect route of this model by investigating the "pseudo-homoviseme" effect using a lexical decision task with deaf German-reading adults. We found a main effect of pseudo-homovisemy, suggesting that at least some deaf individuals automatically access sublexical structure during single-word reading.
In this study, we use corpus data to verify the observation that signs for emotion-related concepts in German Sign Language are articulated with congruent facial movements. We propose an account of the function of these facial movements in the language that also explains the function of mouthings and other facial movements at the lexical level. Our data, taken from 20 signers in three different conditions, show that for disgust-related signs, a disgust-related facial movement with temporal scope over only the individual sign occurred in most cases. These movements often occurred in addition to disgust-related facial movements that had temporal scope over the entire clause. Using the Facial Action Coding System, we found some variability in how exactly the facial movement was instantiated, but most commonly it consisted of tongue protrusion and an open mouth. We propose that these lexically related facial movements be regarded as an additional layer of communication with both phonological and morphological properties, and we extend this proposal to mouthings as well. The relationship between this layer and manual lexical items is analogous in some ways to the gesture-word and intonation-word relationships.