2014
DOI: 10.48550/arxiv.1411.5731
Preprint

Visual Sentiment Prediction with Deep Convolutional Neural Networks

Cited by 30 publications (40 citation statements): 1 supporting, 39 mentioning, 0 contrasting
References 13 publications

“…In the early studies, a CNN was often used directly as an off-the-shelf tool, without modification. For example, Xu et al. [30] trained two classifiers on the outputs of two fully connected (FC) layers (FC7 and FC8) of an existing base network (AlexNet). The experimental results show that the classifier built on FC7 features (0.649) performs better than the one built on FC8 features (0.615).…”
Section: Learning-based Methods (mentioning)
confidence: 99%
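
The FC7-versus-FC8 setup described in that statement is straightforward to reproduce with a modern toolkit. Below is a minimal sketch, assuming PyTorch/torchvision rather than the Caffe-era pipeline of Xu et al. [30]; the `extract_fc7` helper and the two-class sentiment head are illustrative names, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained AlexNet used purely as a fixed feature extractor.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def extract_fc7(images: torch.Tensor) -> torch.Tensor:
    """Return 4096-d FC7 activations for a batch of 3x224x224 images."""
    with torch.no_grad():
        h = alexnet.features(images)
        h = alexnet.avgpool(h)
        h = torch.flatten(h, 1)
        # classifier = [Dropout, FC6, ReLU, Dropout, FC7, ReLU, FC8];
        # stop right after the FC7 Linear layer (index 4).
        for layer in alexnet.classifier[:5]:
            h = layer(h)
    return h

# Hypothetical two-class (positive/negative) sentiment head trained
# on the frozen FC7 features; an FC8 variant would tap index 6 instead.
sentiment_head = nn.Linear(4096, 2)
logits = sentiment_head(extract_fc7(torch.randn(4, 3, 224, 224)))
```
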
“…Quantitative Comparison of Representative Deep Methods. As shown in Table 8, we conduct experiments to fairly compare four representative learning-based methods: DCNN [30], RCA [107], WSCNet [12], and PDANet [111]. To evaluate effectiveness and robustness, we replace each method's original backbone with four alternatives: AlexNet [136], VGG-16 [114], ResNet-50 [18], and Inception-v3 [137].…”
Section: Learning-based Methods (mentioning)
confidence: 99%
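
A backbone-swap comparison of the kind described above can be arranged by replacing only the final classification layer of each model, so that everything except the backbone is held fixed. This is a minimal sketch under assumed torchvision APIs; `build_model` and the two-class default are illustrative, and the survey's actual training protocol (datasets, schedules, heads) is not reproduced here.

```python
import torch.nn as nn
from torchvision import models

def build_model(backbone: str, num_classes: int = 2) -> nn.Module:
    """Swap the backbone while keeping the output size fixed,
    so only the backbone varies across runs."""
    if backbone == "alexnet":
        m = models.alexnet(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif backbone == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif backbone == "resnet50":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif backbone == "inception_v3":
        m = models.inception_v3(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {backbone}")
    return m

# One model per backbone, ready for identical training runs.
models_by_backbone = {b: build_model(b)
                      for b in ("alexnet", "vgg16", "resnet50", "inception_v3")}
```
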
“…Global features are extracted directly from the whole image. One direct and intuitive method is to employ the output of the last few fully connected (FC) layers as deep features, using either pretrained or finetuned CNN models [59,4,69]. The last few FC layers correspond to high-level semantic features, which may not be sufficient to represent emotions, especially for abstract images.…”
Section: Deep Features (mentioning)
confidence: 99%
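
To make the "global deep feature" idea concrete: the output of a late FC layer in a pretrained CNN can be harvested with a forward hook. A minimal sketch, assuming torchvision's VGG-16; the cited works [59,4,69] differ in which layer they tap and whether the network is finetuned first, so the layer index below is an illustrative choice.

```python
import torch
from torchvision import models

vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
feats = {}

def save_fc7(_module, _input, output):
    # Detach so the 4096-d descriptor can be reused outside the graph.
    feats["fc7"] = output.detach()

# classifier = [FC6, ReLU, Dropout, FC7, ReLU, Dropout, FC8];
# index 3 is the second FC layer, a common choice of global deep feature.
vgg.classifier[3].register_forward_hook(save_fc7)

with torch.no_grad():
    vgg(torch.randn(1, 3, 224, 224))  # dummy image batch for illustration
print(feats["fc7"].shape)  # torch.Size([1, 4096])
```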