2021
DOI: 10.1016/j.heliyon.2021.e07018
The recognition of facial expressions of emotion in deaf and hearing individuals

Abstract: During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli full…

Cited by 12 publications (9 citation statements)
References 53 publications (93 reference statements)
“…On the other hand, and in contrast to the above, it seems that once sign language is consolidated there are gains in subdomains of some functions, for example in the peripheral attention processes that underlie inhibitory control (Bosworth & Dobkins, 2002). This makes sense given that a visuospatial language such as sign language develops spatially, supported more by visual and proprioceptive processes. This is consistent with the Enhanced Hypothesis proposed by Rodger et al. (2021) and Sidera et al. (2017), which states that in the absence of hearing the visual system takes on a more significant role. However, Dye, Hauser & Bavelier (2009) suggest that hearing deprivation is not a causal factor for difficulties in cognitive processes such as attention; rather, these may be a consequence of other factors that have nothing to do with deafness per se, since it is essential to take into account aspects such as the etiology of deafness, sociocultural factors, the age of acquisition of a communicative system, and even the modality in which this system is consolidated, that is, whether it is a sign language or an oral language.…”
Section: Introduction (supporting)
confidence: 91%
“…Children with hearing impairment may experience a developmental delay in facial expression recognition ability (Most & Michaelis, 2012; Wang et al., 2011). Some believe this could be related to their language ability rather than to deafness itself (Sidera et al., 2017), while others report that people with hearing impairment match hearing individuals in the recognition of facial expressions of emotion (Rodger et al., 2021). However, students who struggle to initiate basic social awareness skills cannot form good peer relationships and/or interactions with others.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, the mouth gesture of a smile can be semantically aligned to nouns, adjectives, and simple verbs (Bank et al., 2011; Johnston et al., 2016), but such gesticulations may also include ancillary emotional cues which are processed by signers perceptually. Deaf signers match hearing non-signers in the recognition of facial expressions of emotion (Rodger et al., 2021), although for them facial expressions primarily provide grammatical and syntactic markers.…”
Section: Reading the Face and Effects of Face Masks (mentioning)
confidence: 99%