Proceedings of the 25th ACM International Conference on Multimedia 2017
DOI: 10.1145/3123266.3123444
Affect Recognition in Ads with Application to Computational Advertising

Abstract: Advertisements (ads) often include strongly emotional content to leave a lasting impression on the viewer. This work (i) compiles an affective ad dataset capable of evoking coherent emotions across users, as determined from the affective opinions of five experts and 14 annotators; (ii) explores the efficacy of convolutional neural network (CNN) features for encoding emotions, and observes that CNN features outperform low-level audio-visual emotion descriptors upon extensive experimentation; and (iii) demonstra…


Cited by 21 publications (29 citation statements)
References 18 publications (74 reference statements)
“…Currently, the features used in affective video content analysis are mainly from two categories [107,109]. One is considering the stimulus of video content and extracting the features reflecting the emotions conveyed by the video content itself.…”
Section: Affective Computing of Videos
“…In many cases, computations are conducted over each frame of the video, and the average values of the computational results of the overall video are considered as visual features. Specifically, the color-related features often contain the histogram and variance of color [20,107,177], the proportions of color [82,86], the number of white frame and fades [177], the grayness [20], darkness ratio, color energy [132,170], brightness ratio and saturation [85,86], etc. In addition, the differences of dark and light can be reflected by the lighting key, which is used to evoke emotions in video and draw the attention of viewers by creating an emotional atmosphere [82].…”
Section: Content-related Features
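As a rough illustration of this per-frame-then-average scheme, here is a minimal Python sketch assuming OpenCV (cv2) and NumPy are available; the particular descriptors (a coarse hue histogram, mean brightness and saturation, a darkness ratio) and the helper names are illustrative choices, not the exact features used in the cited works.

# Sketch: per-frame colour descriptors averaged over a whole video.
# Assumes OpenCV (cv2) and NumPy; the specific features are illustrative.
import cv2
import numpy as np

def frame_color_features(frame_bgr, hue_bins=16):
    """Return a small colour descriptor for one BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    hist = cv2.calcHist([h], [0], None, [hue_bins], [0, 180]).flatten()
    hist /= hist.sum() + 1e-8                      # normalised hue histogram
    brightness = v.mean() / 255.0                  # mean brightness
    saturation = s.mean() / 255.0                  # mean saturation
    darkness_ratio = float((v < 40).mean())        # fraction of dark pixels
    return np.concatenate([hist, [brightness, saturation, darkness_ratio]])

def video_color_features(path, sample_every=5):
    """Average per-frame descriptors over the video (every Nth frame)."""
    cap = cv2.VideoCapture(path)
    feats, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            feats.append(frame_color_features(frame))
        idx += 1
    cap.release()
    return np.mean(feats, axis=0) if feats else None

Sampling every Nth frame simply keeps the computation cheap; averaging all frames, as described in the quoted passage, works the same way.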
“…Classical works in visual affect recognition such as [16,17,59] focus on low level handcrafted features such as motion activity, color histograms etc. Recent video analytic works have leveraged the advances in deep learning [8,47]. Combined with the breakthroughs in deep learning since AlexNet [34], the Places database [60] enables training of deep learning models for scene attribute recognition on a large scale.…”
Section: Visual Content Analysis for Affect Recognition
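To make the frame-level deep-feature idea concrete, here is a hedged sketch that mean-pools per-frame CNN features into one video descriptor. It uses an ImageNet-pretrained torchvision ResNet-18 as a stand-in backbone (the cited works use Places-trained scene networks, whose weights are distributed separately), and all function names are illustrative rather than the authors' code.

# Sketch: pooling frame-level CNN features over sampled video frames.
# ImageNet-pretrained ResNet-18 stands in for a Places-trained scene model.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def video_cnn_descriptor(frame_paths):
    """Mean-pool per-frame CNN features into one video-level descriptor."""
    feats = []
    for p in frame_paths:
        img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(img).squeeze(0))
    return torch.stack(feats).mean(dim=0)   # 512-d video descriptor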
“…Defining valence as the feeling of pleasantness/unpleasantness and arousal as the intensity of emotional feeling while viewing an advertisement, five experts compiled a dataset of 100, roughly 1-minute long commercial advertisements (ads) in [47]. These ads are publicly available 1 and roughly uniformly distributed over the arousal-valence plane defined by Greenwald et al [14] (Figure 1(left)) as per the first impressions of 23 naive annotators.…”
Section: Dataset Description
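A small sketch of how such first-impression ratings might be aggregated per ad and checked for coverage of the arousal-valence plane, assuming the ratings sit in a table with one row per (ad, annotator) pair; the file name, column names, and the zero-centred rating scale are assumptions for illustration, not the released data format.

# Sketch: averaging per-annotator first impressions into one (valence, arousal)
# point per ad, then counting ads per quadrant of the arousal-valence plane.
import pandas as pd

ratings = pd.read_csv("ad_ratings.csv")   # assumed columns: ad_id, annotator, valence, arousal

per_ad = ratings.groupby("ad_id")[["valence", "arousal"]].mean()

# Quadrant of each ad, assuming ratings are centred at 0.
per_ad["quadrant"] = (
    per_ad["valence"].gt(0).map({True: "V+", False: "V-"})
    + "/"
    + per_ad["arousal"].gt(0).map({True: "A+", False: "A-"})
)
print(per_ad["quadrant"].value_counts())  # roughly equal counts if well balanced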
“…Alternatively, a smart Viz UI could improve its current visualization with a potentially more intuitive one upon detecting high user workload. CLE generalizability has brightened with the success of deep convolutional neural networks (deep CNNs), which robustly learn problem-specific features and adapt effectively with minimal additional training [13]. We examined if user EEG responses obtained for the character, spatial pattern, bar graph and pie chart visualizations under different mental workload levels induced by the n-back task [2,7,10,15] had any similarities (Figure 1).…”
Section: Introduction
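To illustrate the "minimal additional training" point, here is a hedged sketch of the usual transfer-learning recipe: freeze a pretrained backbone and retrain only a small classification head. The backbone choice, the two-class workload label set, and the training-step helper are assumptions for illustration, not the cited study's pipeline.

# Sketch: transfer learning with a frozen pretrained backbone and a small
# trainable head (e.g. low vs. high workload). Illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 2   # assumed label set: low vs. high workload

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False              # freeze the pretrained features

model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(batch_inputs, batch_labels):
    """One optimisation step over the small head only."""
    optimizer.zero_grad()
    loss = criterion(model(batch_inputs), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()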