2011
DOI: 10.1016/j.actpsy.2011.02.004
Preattentive processing of audio-visual emotional signals

Abstract: Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unim…

Cited by 50 publications (54 citation statements)
References 49 publications (73 reference statements)
“…The task was adapted from [5] where details of the stimulus generation and evaluation are described. Auditory stimuli consisted of short sound tracks of voices speaking out bisyllabic German pseudowords (“lolo”, “tete”, or “gigi”) at a sound level varying between 65 and 72 dB, presented via two loudspeakers located at either side of a computer screen (width = 36 cm).…”
Section: Methods (confidence: 99%)
“…For both congruent and incongruent audio-visual trials the audio track and the video stream originated from independent recordings. This allowed compensating for possible minimal temporal misalignments when independent audio and visual streams were combined ([5], for details).…”
Section: Methods (confidence: 99%)
“…For example, a ‘fearful’ voice is more likely to be perceived as ‘fearful’ if accompanied by a ‘fearful’ face rather than a ‘happy’ one, and emotionally congruent audiovisual stimuli are responded to faster than incongruent ones (e.g., Dolan et al 2001). This type of integration happens automatically (e.g., Föcker et al 2011; Vroomen et al 2001) with quick neural consequences (e.g., Pourtois et al 2000). …”
Section: Introduction (confidence: 99%)