2018
DOI: 10.1097/aud.0000000000000546
Children’s Recognition of Emotional Prosody in Spectrally Degraded Speech Is Predicted by Their Age and Cognitive Status

Abstract: These results indicate that cognitive function and age play important roles in children's ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children's age further suggests that younger and older children may benefit similarly from improvements in spectral resolution. The findings imply that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution.

Cited by 27 publications (33 citation statements) · References 36 publications
“…Some specific neurocognitive skills, such as working memory capacity, appear to be more strongly related to speech perception and recognition than more general cognitive abilities, such as nonverbal intelligence (for a review, see Akeroyd1). Working memory (Lyxell et al.29; Tao et al.52), as well as inhibitory control (Moberly et al.31), verbal learning and memory (Pisoni et al.25), and processing speed (Tinnemore et al.53), have been linked to individual differences in speech recognition among adult CI users. In addition, although a strong relation has not been established, nonverbal reasoning skills have recently been found to be associated with individual performance among postlingually deafened adult CI users, independently of age (Mattingly et al.30).…”
Section: Introduction (mentioning)
confidence: 99%
“…What we know is that despite CI users’ significant accomplishments in decoding spoken speech from restricted cues (Shannon et al., 1995), discerning the emotional meaning conveyed in the prosody of speech remains challenging by comparison (Tinnemore et al., 2018). While developments to accurately encode a wider array of acoustic features for CI users are underway, enhancing communication outcomes using perceptual and cognitive resources is the focus of the present research.…”
mentioning
confidence: 99%
“…The results suggest that the strong pitch salience available to NH listeners generated a more robust representation of emotion cues, enabling them to be more tolerant of the acoustic features that are obscured by spectral degradation. Other factors, including better linguistic and cognitive abilities in adults than in children, have been shown to increase accuracy in identifying emotions from spectrally degraded speech (Tinnemore et al., 2018).…”
mentioning
confidence: 99%
“…Most studies investigating voice perception in CI users found general deficits, sometimes with substantial individual differences, in the ability to perceive voice gender (e.g., Fu, Chinchilla, & Galvin, 2004; Fu, Chinchilla, Nogaki, & Galvin, 2005; Fuller et al., 2014; Gaudrain & Baskent, 2018; Hazrati, Ali, Hansen, & Tobey, 2015; Kovacic & Balaban, 2009; Li & Fu, 2011; Massida et al., 2013; Meister, Fursen, Streicher, Lang-Roth, & Walger, 2016; Meister, Landwehr, Pyschny, Walger, & von Wedel, 2009) or emotion (e.g., Agrawal et al., 2013; Jiam, Caldwell, Deroche, Chatterjee, & Limb, 2017; Kim & Yoon, 2018; Paquette et al., 2018; Schorr et al., 2009; Tinnemore, Zion, Kulkarni, & Chatterjee, 2018; Waaramaa, Kukkonen, Mykkanen, & Geneid, 2018). Few studies reported impairments in other aspects of voice perception, such as speaker familiarity or identity (e.g., Gonzalez & Oliver, 2005; Muhler, Ziese, & Verhey, 2017; Vongphoe & Zeng, 2005).…”
Section: Introduction (mentioning)
confidence: 99%