Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML) 2018
DOI: 10.18653/v1/w18-3301
Getting the subtext without the text: Scalable multimodal sentiment classification from visual and acoustic modalities

Abstract: In the last decade, video blogs (vlogs) have become an extremely popular method through which people express sentiment. The ubiquitousness of these videos has increased the importance of multimodal fusion models, which incorporate video and audio features with traditional text features for automatic sentiment detection. Multimodal fusion offers a unique opportunity to build models that learn from the full depth of expression available to human viewers. In the detection of sentiment in these videos, acoustic an…
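To make the fusion setting described in the abstract concrete, the following is a minimal sketch of late fusion over pre-extracted visual and acoustic utterance features for binary sentiment classification. It is not the paper's actual pipeline: the feature dimensions, the synthetic data, and the logistic-regression classifier are illustrative assumptions; the only property carried over from the source is that no transcript (text) features are used.

```python
# Minimal late-fusion sketch (assumed setup, not the paper's method):
# concatenate pre-extracted visual and acoustic feature vectors and
# train a simple linear classifier, with no text/transcript features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: one pooled vector per video clip.
n_clips = 200
visual_feats = rng.normal(size=(n_clips, 64))    # e.g., pooled facial/visual features
acoustic_feats = rng.normal(size=(n_clips, 32))  # e.g., pooled prosodic/acoustic features
labels = rng.integers(0, 2, size=n_clips)        # positive vs. negative sentiment

# Late fusion by concatenation, then a linear classifier.
fused = np.concatenate([visual_feats, acoustic_feats], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```

In practice the feature extractors and the fusion strategy (early, late, or tensor-based) are the interesting design choices; the sketch above only fixes the interface: one visual vector and one acoustic vector per clip, fused before classification.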

Cited by 14 publications (8 citation statements); references 30 publications. Citing publications span 2018-2023.
“…The Maximum Entropy classifier was designed to integrate audio and text processing into a single system, and this model outperformed other conventional classifiers. Blanchard et al. (2018) developed a fusion technique for the audio and video modalities, using audio and video features to analyze spoken sentences for sentiment. They did not consider traditional transcription features, in order to minimize human intervention.…”
Section: Related Work
confidence: 99%
“…The authors evaluated their approach on different datasets and reported accuracy improvements in the range of 2-3% over state-of-the-art models. Blanchard et al. (2018) proposed a multi-modal fusion model that exclusively uses high-level visual and acoustic features for sentiment classification.…”
Section: Problem Definition
confidence: 99%
“…We compare our proposed approach against various existing systems (Nojavanasghari et al., 2016; Rajagopalan et al., 2016; Zadeh et al., 2018a; Blanchard et al., 2018) that made use of the same datasets. A comparative study is shown in Table 7.…”
Section: Comparative Analysis
confidence: 99%
“…For each dataset, we report the three best systems for the comparisons. In particular, we compare with the following systems: Bag of Features - Multimodal Sentiment Analysis (BoF-MSA) (Blanchard et al., 2018), Memory Fusion Network (MFN) (Zadeh et al., 2018b), Deep Fusion - Deep Neural Network (DF-DNN) (Nojavanasghari et al., 2016), Multi-View LSTM (MV-LSTM) (Rajagopalan et al., 2016), Early Fusion LSTM (EF-LSTM) (Zadeh et al., 2018c), Tensor Fusion Network (TFN), Random Forest (RF) (Breiman, 2001), Support Vector Machine (Zadeh et al., 2016), Multi-Attention Recurrent Network (MARN) (Zadeh et al., 2018a), Dynamic Fusion Graph (DFG) (Zadeh et al., 2018c), Multi-Modal Multi-Utterance Bimodal Attention (MMMU-BA) (Ghosal et al., 2018), Bi-directional Contextual LSTM (BC-LSTM) (Poria et al., 2017b), and Multimodal Factorization Model (MFM) (Tsai et al., 2018).…”
Section: Comparative Analysis
confidence: 99%
“…Poria et al. (2017a) presented a literature survey of multi-modal analysis across various affect dimensions, e.g., sentiment analysis and emotion analysis. A multi-modal fusion-based approach for sentiment classification is proposed in (Blanchard et al., 2018). The authors used an exclusively high-level fusion of visual and acoustic features to classify sentiment.…”
Section: Related Work
confidence: 99%