2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO)
DOI: 10.1109/icrito48877.2020.9197899
Determining Attention Mechanism for Visual Sentiment Analysis of an Image using SVM Classifier in Deep learning based Architecture

Cited by 12 publications (6 citation statements)
References 9 publications
“…In the future, we plan to use the state-of-the-art attention mechanism [16] or object detection systems [50][51][52] to retrieve the corresponding affective regions. Then, we will combine the whole image and the affective regions to obtain better performance and more powerful interpretability.…”
Section: Discussion
confidence: 99%
“…Some state-of-the-art methods have employed CNNs for image sentiment analysis. Das et al. [16] proposed a deep learning model with an attention mechanism for focusing on local regions and determining the conveyed sentiment. She et al. [17] proposed a weakly supervised coupled network (WSCNet) that automatically selects relevant regions to reduce the burden of sentiment annotation and improve performance.…”
Section: Related Work
confidence: 99%
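The citation above describes an attention mechanism that weights local regions of an image before pooling for sentiment prediction. A minimal NumPy sketch of that idea (the scoring vector `w` and the soft spatial pooling are illustrative assumptions, not the cited model's actual layers):

```python
import numpy as np

def spatial_attention_pool(feat, w):
    """Attention-weighted pooling over a CNN feature map.

    feat: (C, H, W) feature map; w: (C,) hypothetical scoring vector.
    Returns the pooled (C,) vector and the (H, W) attention map.
    """
    scores = np.einsum("c,chw->hw", w, feat)   # one score per spatial location
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # softmax over all H*W locations
    pooled = np.einsum("hw,chw->c", a, feat)   # weighted sum of local features
    return pooled, a

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 4, 4))              # toy 8-channel 4x4 feature map
w = rng.normal(size=8)
pooled, attn = spatial_attention_pool(feat, w)
```

The attention map `attn` sums to one, so regions with high scores dominate the pooled representation, which is the intuition behind "focusing on local regions" for sentiment.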
“…Papiya Das et al. [49] used an SVM classification layer on top of a deep CNN architecture; on various visual datasets, the accuracies were 65.89% and 68.67%.…”
Section: By Incorporating Label Information
confidence: 99%