2019 International Conference on Multimodal Interaction
DOI: 10.1145/3340555.3353731
Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning

Abstract: Various psychological factors affect how individuals express emotions. Yet, when we collect data intended for use in building emotion recognition systems, we often try to do so by creating paradigms that are designed just with a focus on eliciting emotional behavior. Algorithms trained with these types of data are unlikely to function outside of controlled environments because our emotions naturally change as a function of these other factors. In this work, we study how the multimodal expressions of emotion ch…
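The abstract names adversarial learning as the mechanism for removing confounder information from the learned emotion representation. The paper's actual architecture is not reproduced on this page; below is a minimal sketch assuming a gradient-reversal formulation in the style of domain-adversarial training. All class names, dimensions, and the choice of confounder are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch: adversarial confounder control via a gradient reversal
# layer (GRL). Assumption: the confounder (e.g., stress level) is a
# categorical label available at training time.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on the
    backward pass, so the encoder is pushed toward features that the
    confounder head cannot predict."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class AdversarialEmotionModel(nn.Module):
    def __init__(self, feat_dim=128, n_emotions=4, n_confounders=3, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.emotion_head = nn.Linear(64, n_emotions)      # main task
        self.confound_head = nn.Linear(64, n_confounders)  # adversary

    def forward(self, x):
        z = self.encoder(x)
        emo_logits = self.emotion_head(z)
        conf_logits = self.confound_head(GradReverse.apply(z, self.lambd))
        return emo_logits, conf_logits
```

Under this formulation, training minimizes the sum of the emotion loss and the confounder loss; because the reversal layer flips gradients into the encoder, the shared features are driven toward being uninformative about the confounder while remaining predictive of emotion.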


Cited by 8 publications (4 citation statements)
References 47 publications
“…The experimental results showed the dominance of GET as compared to other models of image captioning. Experimental results on all listed MMID models are compiled and listed separately for each…”
[Figure residue: bar-chart panels (labels “R”, “S”; axis ticks 0–40) comparing R-LSTM [39], RFNet [40], He [42], Feng [43], GET [44], Wang [45], FCN-LSTM [46], Bag-LSTM [47], Stack-VS [48], VSR [49], GLA [50], Up-Down [51], MAGAN [55], and MGAN [56].]
Section: Multimodal Image Description Results
Citation type: mentioning
Confidence: 99%
[Figure residue: bar-chart panel (label “M”) comparing CRNN [38], R-LSTM [39], RFNet [40], He [42], Feng [43], GET [44], Wang [45], FCN-LSTM [46], Bag-LSTM [47], Stack-VS [48], VSR [49], GLA [50], Up-Down [51], MAGAN [55], and MGAN [56].]
Section: Multimodal Image Description Results
Citation type: mentioning
Confidence: 99%
“…In this framework, audio and textual modalities are used for the detection of emotions. M. Jaiswal et al. [67] analyze how an individual's emotional expressions change under various stress levels. The performance of this task is affected by the degree of lexical or acoustic features.…”
Section: Other MMDL Applications
Citation type: mentioning
Confidence: 99%
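The citing statement above describes a framework that combines audio and textual modalities for emotion detection. As a minimal, hypothetical illustration of that kind of bimodal fusion (not the cited framework's actual model; all names and dimensions below are assumptions), one common baseline encodes each modality separately and concatenates the embeddings before classification:

```python
# Hedged sketch: early fusion of audio and text features for emotion
# classification. Feature dimensions are illustrative placeholders.
import torch
import torch.nn as nn


class BimodalFusion(nn.Module):
    """Encodes each modality, concatenates the embeddings, and classifies;
    a generic baseline, not the cited paper's exact architecture."""

    def __init__(self, audio_dim=40, text_dim=300, hidden=64, n_emotions=4):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_emotions)

    def forward(self, audio_feats, text_feats):
        # Concatenate per-modality embeddings along the feature axis.
        z = torch.cat([self.audio_enc(audio_feats),
                       self.text_enc(text_feats)], dim=-1)
        return self.classifier(z)
```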