2017
DOI: 10.1371/journal.pone.0185821
Multisensory emotion perception in congenitally, early, and late deaf CI users

Abstract: Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a differen…

Cited by 11 publications (14 citation statements). References 51 publications.
“…The current findings are consistent with those reported in a recent investigation examining adolescent and adult CI users’ closed set emotion recognition from multisensory talker information ( Fengler et al., 2017 ). While CI users were less accurate than hearing controls in recognizing emotions from prosodic vocal cues alone, they relied more strongly on visual facial expressions as indicated by a cost to performance when visual facial cues were incongruent with the vocal expressive cues.…”
Section: Discussion (supporting, confidence: 92%)
“…Our findings are also consistent with those observed by Fengler et al. (2017) in that the ceiling performance seen across spectral band conditions is indicative that multisensory emotion perception is dominated by the visual modality in actual CI users.…”
Section: Discussion (supporting, confidence: 92%)
“…Of interest, while CI patients present better lip-reading abilities than hearing controls ( Calvert et al, 1997 ;Rouger et al, 2007 ;Strelnikov et al, 2009 ), they are surprisingly not better, and even worse at discriminating visual prosodic cues. A similar trend has been reported in CI patients' visual discrimination ability for visual emotional prosodic information ( Fengler et al, 2017 ). To our knowledge, there was no evidence of a lower proficiency of CI patients in any visual processing.…”
Section: Patients Are Highly Efficient for Integrating Visuo-Auditory Prosodic Information (supporting, confidence: 89%)
“…Thus, CI patients may have developed specific skills to combine visual and auditory prosodic cues as they participate directly in speech content comprehension. However, it remains unclear why the discrimination of the emotional content in a sentence is improved in multimodal conditions similarly in CI patients and hearing participants ( Fengler et al, 2017 ) as it relies on similar acoustic changes.…”
Section: Patients Are Highly Efficient for Integrating Visuo-Auditory Prosodic Information (mentioning, confidence: 99%)