2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852317

A Multi-Attentive Pyramidal Model for Visual Sentiment Analysis

Cited by 20 publications (10 citation statements)
References 26 publications
“…Zhang et al. [132] modeled the correlation between object semantics in different image regions to infer the image sentiment based on Bayesian networks. A multi-attentive pyramidal model is proposed in [133] to extract local features at various scales, and then a self-attention mechanism is employed to mine the relations between features of different regions.…”
Section: Learning-based Methods
Mentioning confidence: 99%
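As a rough illustration of the pipeline this statement describes (local features extracted at several scales, then self-attention over the relations between region features), the following is a minimal PyTorch sketch. The module name `RegionSelfAttention`, the 512-dimensional features, the single attention head, and the residual connection are assumptions for illustration, not the configuration of the cited model.

```python
import torch
import torch.nn as nn

class RegionSelfAttention(nn.Module):
    """Hypothetical sketch: scaled dot-product self-attention over region
    features pooled from different pyramid levels. Dimensions and the
    single-head design are assumptions, not the authors' exact setup."""
    def __init__(self, dim=512):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, regions):              # regions: (batch, num_regions, dim)
        q, k, v = self.query(regions), self.key(regions), self.value(regions)
        # Attention weights encode pairwise relations between region features
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v + regions            # relation-enhanced region features

# Usage: region vectors gathered from a feature pyramid (e.g. 1 + 4 + 16 cells)
feats = torch.randn(2, 21, 512)
out = RegionSelfAttention()(feats)           # (2, 21, 512)
```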
“…Woo et al. [17] extend it and propose a convolutional block attention module that adaptively recalibrates spatial-wise feature responses. In recent years, inspired by channel and spatial attention, visual attention has been increasingly used in image sentiment analysis [18,19]. Unlike previous work, we develop a new attention module, Hybrid, which simultaneously models attribute dependencies between feature maps at adjacent semantic levels in the channel and spatial dimensions.…”
Section: Attention Mechanism
Mentioning confidence: 99%
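The channel-then-spatial recalibration referred to in this statement can be sketched in PyTorch as below. This is a CBAM-style illustration only; the reduction ratio of 16, the 7x7 spatial kernel, and the class name `ChannelSpatialAttention` are assumptions, not the design of the Hybrid module or of the cited works.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Hypothetical CBAM-style sketch: channel attention followed by spatial
    attention. Hyperparameters are illustrative assumptions."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                         # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```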
“…Self-attention is a crucial method for building relationships between query and key features, which is beneficial for exploring the associations between visual regions. He et al. extracted local visual features with a pyramid network and mined the associations between local visual features through a self-attention mechanism [30]. Bera et al. extracted semantic regions using SIFT keypoints and focused on the most relevant regions using attention mechanisms [31].…”
Section: Related Work
Mentioning confidence: 99%