2020
DOI: 10.1109/access.2020.2969205

Multi-Modal Sentiment Classification With Independent and Interactive Knowledge via Semi-Supervised Learning

Abstract: Multi-modal sentiment analysis extends the conventional text-based definition of sentiment analysis to a multi-modal setup in which multiple relevant modalities are leveraged to perform sentiment analysis. In real applications, however, acquiring annotated multi-modal data is normally labor-intensive and time-consuming. In this paper, we aim to reduce the annotation effort for multi-modal sentiment classification via semi-supervised learning. The key idea is to leverage the semi-supervised variational autoencoders to…
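A minimal sketch of the key idea the abstract names, in the style of Kingma et al.'s M2 semi-supervised VAE: a classifier q(y|x) is trained jointly with a label-conditional VAE, so unlabeled examples contribute through a marginalized ELBO. The architecture, layer sizes, and the single feature-vector input are illustrative assumptions, not the paper's actual model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SemiSupervisedVAE(nn.Module):
        # Illustrative M2-style semi-supervised VAE; all sizes are assumptions.
        def __init__(self, feat_dim=512, num_classes=3, latent_dim=64):
            super().__init__()
            self.num_classes = num_classes
            self.classifier = nn.Linear(feat_dim, num_classes)                 # q(y|x)
            self.encoder = nn.Linear(feat_dim + num_classes, 2 * latent_dim)   # q(z|x,y)
            self.decoder = nn.Linear(latent_dim + num_classes, feat_dim)       # p(x|z,y)

        def elbo(self, x, y_onehot):
            # Per-sample evidence lower bound with the label observed (or assumed).
            mu, logvar = self.encoder(torch.cat([x, y_onehot], dim=-1)).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()               # reparameterize
            recon = self.decoder(torch.cat([z, y_onehot], dim=-1))
            recon_loss = F.mse_loss(recon, x, reduction="none").sum(-1)
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
            return -(recon_loss + kl)

        def loss(self, x_labeled, y, x_unlabeled, alpha=1.0):
            # Labeled term: negative ELBO plus a supervised classification loss.
            y_onehot = F.one_hot(y, self.num_classes).float()
            labeled = -self.elbo(x_labeled, y_onehot).mean()
            labeled = labeled + alpha * F.cross_entropy(self.classifier(x_labeled), y)
            # Unlabeled term: marginalize the unknown label under q(y|x).
            probs = F.softmax(self.classifier(x_unlabeled), dim=-1)
            batch = x_unlabeled.size(0)
            elbos = torch.stack(
                [self.elbo(x_unlabeled,
                           F.one_hot(torch.full((batch,), c, dtype=torch.long,
                                                device=x_unlabeled.device),
                                     self.num_classes).float())
                 for c in range(self.num_classes)],
                dim=-1)
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
            unlabeled = -((probs * elbos).sum(-1) + entropy).mean()
            return labeled + unlabeled

Per the citation statements below, the paper applies this kind of model per single modality ("independent knowledge") and across modalities ("interactive knowledge"); that composition is a reading of the report, not reproduced code.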

Cited by 22 publications (12 citation statements) | References 26 publications
“…Yu et al. [2,3] first introduced the multimodal factorized bilinear pooling approach to integrate image and text features, and then used the co-attention mechanism to jointly learn the image and text attentions. Moreover, Zhang et al. [27] developed semi-supervised variational autoencoders to extract independent knowledge from single-modality data and interactive knowledge from different modalities. Furthermore, Wang et al. [28] proposed an end-to-end fusion method to perform the MSC task.…”
Section: Related Work
confidence: 99%
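As a rough sketch of the multimodal factorized bilinear (MFB) pooling step this citation statement mentions: two modality features are projected into a shared factorized space, multiplied elementwise, and sum-pooled over the factors. All dimensions and names are illustrative assumptions, and the co-attention mechanism is omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MFBFusion(nn.Module):
        # Low-rank factors stand in for a full bilinear interaction tensor.
        def __init__(self, img_dim=2048, txt_dim=512, out_dim=256, factor_k=5):
            super().__init__()
            self.out_dim, self.factor_k = out_dim, factor_k
            self.proj_img = nn.Linear(img_dim, out_dim * factor_k)
            self.proj_txt = nn.Linear(txt_dim, out_dim * factor_k)

        def forward(self, img_feat, txt_feat):
            fused = self.proj_img(img_feat) * self.proj_txt(txt_feat)    # elementwise product
            fused = fused.view(-1, self.out_dim, self.factor_k).sum(-1)  # sum-pool over k factors
            fused = torch.sign(fused) * fused.abs().sqrt()               # power normalization
            return F.normalize(fused, dim=-1)                            # L2 normalization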
“…Ji et al. [29] designed an LSTM-based semi-supervised attention framework; experiments demonstrated the feasibility of the framework for sentiment analysis. Zhang et al. [30] used semi-supervised learning to reduce the annotation work for multimodal sentiment classification; experiments indicated the effectiveness of the proposed semi-supervised approach.…”
Section: Related Work
confidence: 99%
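As a rough illustration of the kind of LSTM-with-attention encoder Ji et al. [29] are credited with (the semi-supervised training scheme itself is omitted); vocabulary size, layer sizes, and names are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveLSTM(nn.Module):
        # Attention-pooled LSTM sentence classifier; all sizes are illustrative.
        def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.attn = nn.Linear(hidden_dim, 1)       # scores each time step
            self.out = nn.Linear(hidden_dim, num_classes)

        def forward(self, tokens):                     # tokens: (batch, seq_len) int64
            h, _ = self.lstm(self.embed(tokens))       # (batch, seq_len, hidden)
            weights = F.softmax(self.attn(h), dim=1)   # attention over time steps
            pooled = (weights * h).sum(dim=1)          # weighted sum of hidden states
            return self.out(pooled)                    # class logits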
“…Zhang et al. [28] aimed to reduce the annotation effort for multi-modal sentiment classification via a semi-supervised learning method. Its key idea was to use semi-supervised variational autoencoders to extract additional information from unlabeled data for multi-modal sentiment analysis.…”
Section: Related Work (A. Sentiment Analysis and Its Classification)
confidence: 99%
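To make that key idea concrete, a hypothetical training step could mix a labeled batch with an unlabeled one, reusing the SemiSupervisedVAE sketch (and its imports) given after the abstract; batch sizes and shapes are illustrative.

    model = SemiSupervisedVAE()              # from the sketch after the abstract
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_lab = torch.randn(32, 512)             # labeled multi-modal features
    y = torch.randint(0, 3, (32,))           # sentiment labels
    x_unlab = torch.randn(64, 512)           # unlabeled features, no annotation cost
    opt.zero_grad()
    loss = model.loss(x_lab, y, x_unlab)     # labeled ELBO + CE, plus marginalized unlabeled ELBO
    loss.backward()
    opt.step()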