2020
DOI: 10.1007/978-3-030-63031-7_26

Multimodal Sentiment Analysis with Multi-perspective Fusion Network Focusing on Sense Attentive Language

Abstract: Multimodal sentiment analysis aims to learn a joint representation of multiple features. As previous studies have demonstrated, the language modality may contain more semantic information than the other modalities. Based on this observation, we propose a Multi-perspective Fusion Network (MPFN) focusing on Sense Attentive Language for multimodal sentiment analysis. Unlike previous studies, we use the language modality as the main part of the final joint representation, and propose a mu…
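
As a minimal sketch of the idea described in the abstract, the snippet below shows one way a sense-level attention module could select among candidate word senses using a query built from fused acoustic and visual features, keeping the language stream as the backbone of the joint representation. All dimensions, names, and the wiring are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: dimensions, names, and wiring are assumptions,
# not the MPFN authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SenseAttentiveLanguage(nn.Module):
    def __init__(self, d_lang=300, d_audio=74, d_visual=35):
        super().__init__()
        # Project the fused acoustic+visual context into a query over word senses.
        self.query = nn.Linear(d_audio + d_visual, d_lang)

    def forward(self, sense_emb, audio, visual):
        # sense_emb: (batch, seq, n_senses, d_lang) candidate sense vectors per word
        # audio:     (batch, d_audio)  utterance-level acoustic feature
        # visual:    (batch, d_visual) utterance-level visual feature
        q = self.query(torch.cat([audio, visual], dim=-1))      # (batch, d_lang)
        scores = torch.einsum('bsnd,bd->bsn', sense_emb, q)     # sense relevance
        alpha = F.softmax(scores, dim=-1)                       # attention over senses
        return torch.einsum('bsn,bsnd->bsd', alpha, sense_emb)  # sense-attentive words

# Usage with random tensors standing in for real features:
words = SenseAttentiveLanguage()(torch.randn(2, 20, 4, 300),
                                 torch.randn(2, 74),
                                 torch.randn(2, 35))
print(words.shape)  # torch.Size([2, 20, 300])
```

The resulting sense-attentive word representations could then be fused with the nonverbal streams, consistent with the abstract's claim that language serves as the main part of the joint representation.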

Cited by 15 publications (9 citation statements: 0 supporting, 9 mentioning, 0 contrasting)
References 20 publications

“…-A sense-level attention framework is designed to learn the word representation dynamically using the fusion of different methods. (Li & Chen, 2020) It achieves the most significant performance improvement.…”
Section: It Does Not Provide Better Performance (mentioning)
confidence: 97%
“…Multi‐perspective fusion network‐based MSA (Li & Chen, 2020) concentrates on sense attentive language. A sense‐level attention network was utilized to learn the word depiction, which is directed through a fusion of numerous modalities.…”
Section: Related Work (mentioning)
confidence: 99%
“…A multiperspective fusion network was proposed by the researchers (X. Li & Chen, 2020) to perform multimodal sentiment analysis with the help of language modality on different datasets like CMU-MOSEI and CMU-MOSI. The main idea behind the use of language modality is that the authors believe it to have more information than visual and acoustic modalities.…”
Section: Challenges Involved in Multimodal Sentiment Analysis (mentioning)
confidence: 99%
“…They conducted experiments on two datasets for their research: MELD and IEMOCAP. Li et al [29] introduced a method for MSA that utilized a multi-perspective fusion network. They researched CMU-MOSI, MOSEI, and YouTube public datasets and found that their method improved accuracy by 2.9 percent.…”
Section: Literature Review (mentioning)
confidence: 99%