2013
DOI: 10.3109/02699052.2012.740648
A resource of validated digital audio recordings to assess identification of emotion in spoken language after a brain injury

Cited by 19 publications (12 citation statements)
References 5 publications
“…(A) Identification: forensic PwS were able to identify spoken emotions, yet their emotional-discriminability was poorer than that of controls; (B) Selective-attention: forensic PwS' performance indicated larger failures of selective-attention than their peers; and (C) Integration: forensic PwS integrated the prosodic and semantic channels in the same fashion as controls. Namely, both groups similarly gave prominence to the prosodic information over the semantic one (prosodic dominance), a marker of typical spoken-emotion processing, as found already in various studies (24–28).…”
Section: Introduction (supporting)
Confidence: 75%
“…Of the 23 papers collected in our meta‐analysis, 16 presented high‐level semantic content (sentences) that may call for integration across auditory channels (semantic and prosodic). Identification of spoken emotions is especially challenging when the emotions presented by the semantics and the prosody do not match (i.e., the identification of irony, Ben‐David, van Lieshout, & Leszcz, 2011; Ben‐David et al, 2013). For example, Ben‐David, Ben‐Itzchak, et al (2020) presented stimuli like “I won the Lottery today” spoken with sad prosody, and asked participants to focus on the emotional prosodic content (in this example, sadness), while ignoring the emotional semantic content (happiness).…”
Section: Discussion (mentioning)
Confidence: 99%
“…Low-pass filtering preserved frequencies below 500 Hz and attenuated higher frequencies, which made the verbal content unintelligible while retaining cues to emotion such as intonation, speech rate, and speech rhythm (Ben-David et al, 2013). The stimuli were presented at approximately 65 dB SPL.…”
Section: Methods (mentioning)
Confidence: 99%
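The filtering procedure quoted above (attenuating frequencies above 500 Hz so words become unintelligible while intonation, rate, and rhythm survive) can be sketched with a standard digital low-pass filter. This is a minimal illustration, not the cited authors' actual pipeline: the 500 Hz cutoff comes from the quote, while the Butterworth design and the filter order are assumptions.

```python
# Sketch of low-pass filtering speech at 500 Hz to mask verbal content
# while preserving prosodic cues. Cutoff (500 Hz) is from the cited
# method; the Butterworth design and order=4 are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass_500hz(signal, sample_rate, order=4):
    """Attenuate frequencies above 500 Hz (zero-phase filtering)."""
    sos = butter(order, 500, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, signal)

# Demo with synthetic tones: a 200 Hz component (below the cutoff,
# roughly the range of voice pitch) passes through; a 2 kHz component
# (carrying consonant/articulation energy) is strongly attenuated.
fs = 16000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 200 * t)
high = np.sin(2 * np.pi * 2000 * t)
filtered = low_pass_500hz(low + high, fs)
```

Zero-phase filtering (`sosfiltfilt`) is used here so the temporal envelope of the prosody is not shifted, which matters when listeners judge speech rate and rhythm.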