Proceedings of the 2nd Multimodal Sentiment Analysis Challenge 2021
DOI: 10.1145/3475957.3484457

Hybrid Mutimodal Fusion for Dimensional Emotion Recognition

Abstract: In this paper, we present our solutions for the MuSe-Stress sub-challenge and the MuSe-Physio sub-challenge of the Multimodal Sentiment Analysis Challenge (MuSe) 2021. The goal of the MuSe-Stress sub-challenge is to predict the level of emotional arousal and valence in a time-continuous manner from audio-visual recordings, and the goal of the MuSe-Physio sub-challenge is to predict the level of psycho-physiological arousal from a) human annotations fused with b) galvanic skin response (also known as Electrodermal Activi…

Cited by 15 publications (6 citation statements) | References 30 publications
“…However, these two types of fusion methods cannot effectively explore the inter-modality dynamics. Recently proposed fusion methods can be categorized into several types as follows, i.e., multi-view learning methods [32,41], word-level fusion methods [15,35,42], tensor fusion [24,40], and hybrid fusion [9,25]. These fusion techniques are effective in learning inter-modality dynamics compared to feature-level and decision-level fusion and show marked performance gains.…”
Section: Multimodal Fusion (citation type: mentioning, confidence: 99%)
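The snippet above contrasts newer fusion families with feature-level (early) and decision-level (late) fusion. As a point of reference, a minimal illustrative sketch of those two baseline strategies is shown below; the array shapes and the random linear "models" are hypothetical and not taken from the paper, which combines such strategies in its hybrid approach.

```python
import numpy as np

# Hypothetical setup: 4 samples with 8 audio features and 16 visual features.
rng = np.random.default_rng(0)
audio = rng.standard_normal((4, 8))
video = rng.standard_normal((4, 16))

# Feature-level (early) fusion: concatenate modality features,
# then apply a single predictor to the joint representation.
fused_features = np.concatenate([audio, video], axis=1)  # shape (4, 24)
w_early = rng.standard_normal(24)
early_pred = fused_features @ w_early  # one score per sample

# Decision-level (late) fusion: predict per modality independently,
# then combine the unimodal scores (here, a simple average).
w_audio = rng.standard_normal(8)
w_video = rng.standard_normal(16)
late_pred = (audio @ w_audio + video @ w_video) / 2

print(fused_features.shape)  # (4, 24)
print(early_pred.shape)      # (4,)
print(late_pred.shape)       # (4,)
```

Neither baseline models interactions between modalities directly, which is the limitation the quoted passage attributes to them: early fusion leaves cross-modal dynamics to a single downstream predictor, while late fusion never sees both modalities at once.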
“…Emotions play a central role in the human experience, exerting a profound influence on both physiological and psychological states. This influence holds promise for diverse applications, including Internet of Things (IoT) devices (Abdallah et al, 2020;Fodor et al, 2023), safe driving practices (Ma et al, 2021), software engineering (Fritz et al, 2014), and beyond. Research in this domain has sought to capture and interpret emotional states through a variety of signals.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
confidence: 99%
“…Visual information in social media offers useful information and enables people to share their instant psychological and physiological status [11]. Visual sentiment recognition plays a considerable role in understanding the sentimental response when humans see specific visual content.…”
Section: Introduction (citation type: mentioning, confidence: 99%)
confidence: 99%