Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge 2015
DOI: 10.1145/2808196.2811640
An Investigation of Annotation Delay Compensation and Output-Associative Fusion for Multimodal Continuous Emotion Prediction

Cited by 66 publications (70 citation statements); references 13 publications.
“…On the other hand, delays for absolute emotions vary for arousal and valence as well as for different databases. The delays for absolute affect dimensions yielded similar results as previous studies on RECOLA (Huang et al, 2015a) and SEMAINE (Nicolle et al, 2012;Mariooryad and Busso, 2014).…”
Section: Delay Compensation (supporting; confidence: 84%)
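The citation above concerns annotation delay compensation: continuous emotion labels lag the stimulus because human raters need time to react, so features and labels must be re-aligned before training. A minimal sketch of one common compensation scheme (shifting the label sequence backward by a fixed number of frames; the function name and exact procedure here are illustrative, not the paper's implementation):

```python
import numpy as np

def compensate_delay(features, labels, delay_frames):
    """Hypothetical helper: re-align features with delayed annotations
    by pairing each feature frame with the label produced
    `delay_frames` later, trimming the unmatched ends."""
    if delay_frames <= 0:
        return features, labels
    # Drop the last `delay_frames` feature frames and the first
    # `delay_frames` labels so the two sequences line up in time.
    return features[:-delay_frames], labels[delay_frames:]

X = np.random.rand(100, 8)   # 100 frames of 8-dim features (toy data)
y = np.random.rand(100)      # continuous arousal annotations (toy data)
Xc, yc = compensate_delay(X, y, delay_frames=10)
print(Xc.shape, yc.shape)    # (90, 8) (90,)
```

In practice the delay is tuned per affect dimension and per database, which is consistent with the quoted observation that optimal delays differ for arousal and valence.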
“…The OA-RVM performances in ρ_c were higher than the audio-only results but somewhat lower than the multimodal results in Huang et al (2015a) on the RECOLA dataset. For SEMAINE, the OA-RVM performances were much lower in Pearson's correlation ρ when compared with the winners of the AVEC 2012 challenge (Nicolle et al, 2012), who achieved 0.65 (arousal) and 0.33 (valence) on the development set, and 0.61 (arousal) and 0.34 (valence) on the test set.…”
Section: Delay Compensation (contrasting; confidence: 55%)
“…One common approach when considering multiple modalities is early (aka feature-level) fusion of unimodal information. This is typically achieved by concatenating all the features from multiple modalities into one combined feature vector, which is then used as the input information for the models [16], [43]–[45]. A benefit of early fusion is that it can provide better discriminative ability to the model by exploiting the complementary information that exists among different modalities.…”

Section: Related Work (mentioning; confidence: 99%)
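The early-fusion approach described in the citation above can be sketched in a few lines: per-frame feature vectors from each modality are simply concatenated along the feature axis before being fed to a single model. The dimensions and data below are illustrative, not taken from the paper:

```python
import numpy as np

# Early (feature-level) fusion: one combined vector per frame.
audio = np.random.rand(50, 13)   # e.g. 13 acoustic features per frame (toy)
video = np.random.rand(50, 20)   # e.g. 20 facial features per frame (toy)

# Concatenate along the feature axis; frames must be time-aligned.
fused = np.concatenate([audio, video], axis=1)
print(fused.shape)  # (50, 33)
```

The fused matrix is then used as input to a single predictor, letting the model exploit cross-modal complementarity at the cost of a higher-dimensional input.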