2008
DOI: 10.1007/s10548-008-0051-8
Emotional Pre-eminence of Human Vocalizations

Abstract: Human vocalizations (HV), as well as environmental sounds, convey a wide range of information, including emotional expressions. The latter have been relatively rarely investigated, and, in particular, it is unclear if duration-controlled non-linguistic HV sequences can reliably convey both positive and negative emotional information. The aims of the present psychophysical study were: (i) to generate a battery of duration-controlled and acoustically controlled extreme valence stimuli, and (ii) to compare the em…

Cited by 13 publications (10 citation statements); references 35 publications.
“…Studying the influence of affective valence on auditory processing is hampered by confounds due to differences in acoustical features inherent to positive, negative, and neutral sounds (Aeschlimann et al., 2008). In addition, the study of affective sound processing using event-related brain potentials would only be possible with short sounds, so that the relevant information becomes available with a more or less constant timing.…”
Section: Discussion
confidence: 99%
“…In our sound isolation booth, participants were seated and asked to rate each stimulus along a 5-point Likert scale, from 1 (little or no emotional content) to 5 (high levels of emotional content). Note that this scale does not discriminate between positive and negative valence within the stimuli; it simply provides a measure of total emotional content (Aeschlimann et al., 2008). Cronbach's α scores were calculated to ensure the reliability of this measure (Cronbach, 1951); the entire set of subjects produced a value of 0.8846, and removing each subject individually from the group data consistently produced values between 0.8458 and 0.894, well above the accepted consistency threshold of 0.7 (Nunnally, 1978).…”
Section: Methods
confidence: 99%
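The reliability check quoted above can be sketched as follows. This is a minimal illustration of Cronbach's α, not the cited authors' code; the function name and the convention of treating each subject as one "item" (column) rating the same set of stimuli (rows) are assumptions for the sketch.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (n_stimuli x n_raters) matrix of ratings.

    Each column holds one rater's scores; each row is one stimulus.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                      # number of raters ("items")
    item_vars = ratings.var(axis=0, ddof=1)   # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-stimulus totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

The leave-one-out values reported in the quote (0.8458–0.894) would correspond to calling such a function repeatedly with one column dropped at a time.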
“…To assess whether these groups of human and animal vocalizations differed acoustically, we statistically compared the spectrograms (computed with Matlab's spectrogram function with no overlapping and zero padding), using a time-frequency bin width of ~5 ms and ~74 Hz. Statistical contrasts entailed a series of nonparametric t tests based on a bootstrapping procedure with 5000 iterations per time-frequency bin to derive an empirical distribution against which to compare the actual difference between the mean spectrograms from each sound category (Aeschlimann et al., 2008; Knebel et al., 2008; De Lucia et al., 2009, 2010b). Note that there was no grouping or averaging of the spectrograms either for a given object or for a given category.…”
Section: Methods
confidence: 99%
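The bin-wise resampling contrast described in this quote can be sketched roughly as below. This is an assumption-laden illustration, not the cited pipeline: the function name and array shapes are invented, and the null distribution is built here by relabeling sounds across the two categories (a permutation-style resampling) rather than by whatever exact bootstrap scheme the authors used.

```python
import numpy as np

def binwise_resampling_test(spec_a, spec_b, n_iter=5000, rng=None):
    """Per-bin resampling test between two sets of spectrograms.

    spec_a : (n_a, n_freq, n_time) array, one spectrogram per sound in category A
    spec_b : (n_b, n_freq, n_time) array, category B
    Returns a (n_freq, n_time) array of two-tailed empirical p-values:
    the fraction of resampled mean differences at least as large (in
    absolute value) as the observed difference in that bin.
    """
    rng = np.random.default_rng(rng)
    observed = spec_a.mean(axis=0) - spec_b.mean(axis=0)
    pooled = np.concatenate([spec_a, spec_b], axis=0)
    n_a = pooled.shape[0] - spec_b.shape[0]
    exceed = np.zeros(observed.shape)
    for _ in range(n_iter):
        idx = rng.permutation(pooled.shape[0])  # relabel sounds across categories
        diff = pooled[idx[:n_a]].mean(axis=0) - pooled[idx[n_a:]].mean(axis=0)
        exceed += np.abs(diff) >= np.abs(observed)
    return exceed / n_iter
```

With the quoted parameters one would call this with `n_iter=5000` on spectrograms binned at roughly 5 ms by 74 Hz; bins with small p-values mark time-frequency regions where the two sound categories differ acoustically.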