Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475692

Exploiting BERT for Multimodal Target Sentiment Classification through Input Space Translation

Cited by 67 publications (35 citation statements)
References 37 publications
“…Since contents in different modalities are often closely related, exploiting such multimodal information can help better analyze users' sentiments towards different aspects. Recent studies on multimodal ABSA mainly concentrate on simple ABSA tasks such as multimodal ATE [182,183] and multimodal ASC [184,185,186,187]. To align the information from different modalities, the text and image are often first encoded to feature representations, then some interaction networks are designed to fuse the information for making the final prediction.…”
Section: Multimodal ABSA (mentioning)
confidence: 99%
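The encode-then-fuse pipeline this statement describes can be made concrete with a minimal PyTorch sketch. BERT and ResNet-50 are assumed encoder choices, and the concatenation-plus-MLP head is an illustrative stand-in for the interaction networks the cited works design, not any specific published model.

```python
# Minimal encode-then-fuse sketch for multimodal aspect sentiment
# classification. Encoder choices and the fusion head are illustrative
# assumptions, not the cited systems.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel

class EncodeThenFuse(nn.Module):
    def __init__(self, num_labels=3):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        backbone = resnet50(weights=None)  # pretrained weights in practice
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        # "Interaction network" reduced to concatenation + MLP for brevity.
        self.fusion = nn.Sequential(
            nn.Linear(768 + 2048, 512), nn.ReLU(), nn.Linear(512, num_labels)
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                                           # (B, 768)
        image_feat = self.image_encoder(pixel_values).flatten(1)  # (B, 2048)
        return self.fusion(torch.cat([text_feat, image_feat], dim=-1))
```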
“…Multimodal Aspect-Based Sentiment Analysis. As an important sentiment analysis task, many approaches have been proposed to tackle the three subtasks of MABSA, including Multimodal Aspect Term Extraction (Wu et al., 2020a,b; Sun et al., 2020), Multimodal Aspect Sentiment Classification (Xu et al., 2019; Yu et al., 2020a; Yang et al., 2021a; Khan and Fu, 2021) and Joint Multimodal Aspect-Sentiment Analysis (Ju et al., 2021). In this work, we aim to propose a general pre-training framework to improve the performance of all three subtasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…1) TomBERT (Yu and Jiang, 2019), which tackles the MASC task by employing BERT to capture intra-modality dynamics. 2) CapTrBERT (Khan and Fu, 2021), which translates the image to a caption as an auxiliary sentence for sentiment classification.…”
Section: Compared Systems (mentioning)
confidence: 99%
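The caption-as-auxiliary-sentence idea attributed to CapTrBERT above can be sketched with Hugging Face transformers. The generate_caption helper is a hypothetical stand-in for whatever image-to-text model is used, and the sentence-pair ordering is an illustrative assumption rather than the published configuration.

```python
# Minimal sketch of the input-space-translation idea behind CapTrBERT
# (Khan and Fu, 2021): translate the image to a caption, then feed the
# caption as BERT's auxiliary sentence alongside the target sentence.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # negative / neutral / positive
)

def generate_caption(image):
    # Hypothetical placeholder: any off-the-shelf image captioning model
    # could go here; a fixed string keeps the sketch runnable.
    return "a man holding a trophy on a stage"

def classify(image, sentence_with_target):
    caption = generate_caption(image)
    # Sentence-pair encoding: [CLS] caption [SEP] sentence [SEP]
    inputs = tokenizer(caption, sentence_with_target, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()
```

Translating the image into the text input space is what lets an unmodified BERT handle both modalities, which is the contrast the citing paper draws against TomBERT's fusion of separate modality encoders.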
“…Over the past decades, statistical knowledge and shallow machine learning algorithms have been frequently utilized in MSA [7,11]. As shallow machine learning is limited by feature engineering and the massive manual work required for data annotation, deep learning has become the mainstream technique for MSA [12,33,36,43] in an end-to-end manner [38]. In a sense, the key to developing deep learning-based MSA models lies in two aspects: multimodal representation learning and cross-modal fusion.…”
Section: Related Work (mentioning)
confidence: 99%
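As an illustration of the cross-modal fusion aspect named in this statement, the following minimal PyTorch sketch lets text token features attend over image region features with a single cross-attention layer; the dimensions and one-layer design are assumptions for exposition, not a published model.

```python
# Minimal cross-modal fusion sketch: text tokens query image regions
# via cross-attention. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T, dim) token features
        # image_feats: (B, R, dim) region features
        attended, _ = self.cross_attn(
            query=text_feats, key=image_feats, value=image_feats
        )
        return self.norm(text_feats + attended)  # residual connection

# Usage: fuse 32 text tokens with 49 image regions (e.g., a 7x7 feature map).
fusion = CrossModalFusion()
out = fusion(torch.randn(2, 32, 768), torch.randn(2, 49, 768))
print(out.shape)  # torch.Size([2, 32, 768])
```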