2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS)
DOI: 10.1109/ctems.2018.8769162
An Ensemble Approach to Utterance Level Multimodal Sentiment Analysis

Cited by 13 publications (7 citation statements) | References 22 publications
“…Contributions. While most of the current works [9,10,11,12] neglect the indispensable role of the cognitive cues in multimodal sentiment analysis, our proposed framework in this paper enhances analytical results by leveraging user-specific latent characteristics. To the best of our knowledge, we reveal the utopian spirit of the hidden parameters and devise the first approach based on an adaptive tree that utilizes an attention-based fusion to facilitate cognitive-oriented knowledge transfer within the tree, forming an ensemble of submodels.…”
Section: Fig. 1: Base Models Accuracy Comparison (mentioning)
confidence: 99%
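The quoted contribution refers to attention-based fusion of modality representations inside an adaptive tree of submodels. As a rough illustration only, not a reproduction of the cited framework's architecture, the Python sketch below shows a generic attention-weighted fusion of utterance-level text, audio, and visual embeddings; the module name, dimensions, and single-score weighting scheme are assumptions made for demonstration.

# Generic sketch of attention-based fusion over modality embeddings.
# Not the cited adaptive-tree design; names and sizes are hypothetical.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One scalar relevance score per modality embedding.
        self.score = nn.Linear(dim, 1)

    def forward(self, modality_embs: torch.Tensor) -> torch.Tensor:
        # modality_embs: (batch, n_modalities, dim)
        weights = torch.softmax(self.score(modality_embs), dim=1)  # (batch, n_modalities, 1)
        # Attention-weighted sum collapses the modality axis into one fused vector.
        return (weights * modality_embs).sum(dim=1)                # (batch, dim)

# Usage: fuse equally sized text, audio, and visual utterance embeddings.
text, audio, visual = (torch.randn(8, 128) for _ in range(3))
fused = AttentionFusion(128)(torch.stack([text, audio, visual], dim=1))
print(fused.shape)  # torch.Size([8, 128])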
“…Sentiment analysis and affective computing face many challenges, such as subjectivity analysis, aspect detection and consideration, topic detection and tracking, and document summarization. Earlier research concentrated on textual data for sentiment analysis, but more recently multimodal data such as the textual-acoustic-visual modalities were considered for sentiment classification and affective computing. Usually, handcrafted features or lexicons or networks or ontologies are used in sentiment classification.…”
Section: Literature Review (mentioning)
confidence: 99%
“…In the early literature, low-level handcrafted (manually extracted) features such as lexicon representations for textual data, low-level descriptors for speech, and facial expressions for video modalities (Rosas et al., 2013), or ontologies (Mahmoud et al., 2018), or sentiment lexicons (Mohammad et al., 2013) were considered. Instead of simplistic fusion techniques such as feature-level fusion (Pérez-Rosas et al., 2013), model-based fusion (Du et al., 2018), or decision-level fusion (Huddar et al., 2018), shallow fusion techniques were proposed to understand the complex correlation between modalities.…”
Section: Related Work (mentioning)
confidence: 99%
“…Recent literature in multimodal affective computing uses either early fusion (also known as feature concatenation) (Pérez-Rosas et al., 2013), model-based fusion (Du et al., 2018), or decision fusion (also known as late fusion) (Huddar et al., 2018). In early or feature-level fusion, features from different modalities are concatenated.…”
Section: Problem With Early, Model-based, and Late Fusion (mentioning)
confidence: 99%
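For readers unfamiliar with the fusion terminology used in these citing works, the Python sketch below contrasts early (feature-level) fusion, which concatenates per-modality features before a single classifier, with late (decision-level) fusion, which averages per-modality classifier outputs at prediction time. The synthetic data, feature dimensions, and choice of logistic regression are assumptions for illustration and are not taken from any of the cited papers.

# Minimal sketch contrasting early (feature-level) and late (decision-level)
# fusion for utterance-level sentiment classification on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 50))
audio_feats = rng.normal(size=(n, 20))
video_feats = rng.normal(size=(n, 30))
labels = rng.integers(0, 2, size=n)

# Early fusion: concatenate per-modality features, train one classifier.
early_X = np.concatenate([text_feats, audio_feats, video_feats], axis=1)
early_clf = LogisticRegression(max_iter=1000).fit(early_X, labels)

# Late fusion: train one classifier per modality, then average their
# predicted probabilities and threshold the result.
modalities = (text_feats, audio_feats, video_feats)
clfs = [LogisticRegression(max_iter=1000).fit(X, labels) for X in modalities]
late_probs = np.mean([c.predict_proba(X)[:, 1] for c, X in zip(clfs, modalities)], axis=0)
late_pred = (late_probs >= 0.5).astype(int)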