Proceedings of the 16th International Conference on Multimodal Interaction 2014
DOI: 10.1145/2663204.2666271
Emotion Recognition in the Wild

Cited by 21 publications (1 citation statement)
References 23 publications
“…These methods can be broadly categorized as either feature-level fusion or decision-level fusion (Wu et al., 2014). The former is generally carried out by concatenating feature vectors from different modalities (Metallinou et al., 2012; Kim and Clements, 2015), while the latter involves developing independent unimodal predictive models and then aggregating the predictions from each modality (Ringeval et al., 2014; Sahoo and Routray, 2016). A combination of both is also possible; for instance, Metallinou et al. (2012) first adopt feature-level fusion, with different weights assigned to the audio and video modalities, followed by model-level fusion to learn a joint representation from multiple modalities.…”
Section: Related Work
confidence: 99%