2023
DOI: 10.3390/s23020661

VisdaNet: Visual Distillation and Attention Network for Multimodal Sentiment Classification

Abstract: Sentiment classification is a key task in exploring people’s opinions; improved sentiment classification can help individuals make better decisions. Social media users are increasingly using both images and text to express their opinions and share their experiences, instead of only using text in conventional social media. As a result, understanding how to fully utilize both modalities is critical in a variety of tasks, including sentiment classification. In this work, we provide a fresh multimodal sentiment classification…

Cited by 8 publications (10 citation statements) · References 48 publications

“…Later, in Section 6.3, we demonstrate through experiments that there is an association between user features and sentiment values in the dataset. Therefore, in this section, we make use of the aforementioned user feature theory and incorporate the user features into our previously proposed VisdaNet [29] model. We improve the “word encoder with word attention” layer in the VisdaNet [29] model and propose the “word encoder with user behavior attention” in this section.…”
Section: UsbVisdaNet · Citation type: mentioning · Confidence: 99%
“…Therefore, in this section, we make use of the aforementioned user feature theory and incorporate the user features into our previously proposed VisdaNet [29] model. We improve the “word encoder with word attention” layer in the VisdaNet [29] model and propose the “word encoder with user behavior attention” in this section. We call the improved model the User Behavior Visual Distillation and Attention Network (UsbVisdaNet), as shown in Figure 2.…”
Section: UsbVisdaNet · Citation type: mentioning · Confidence: 99%