Proceedings of the 19th ACM International Conference on Multimodal Interaction 2017
DOI: 10.1145/3136755.3136796

Evaluating content-centric vs. user-centric ad affect recognition

Abstract: Despite the fact that advertisements (ads) often include strongly emotional content, very little work has been devoted to affect recognition (AR) from ads. This work explicitly compares content-centric and user-centric ad AR methodologies, and evaluates the impact of enhanced AR on computational advertising via a user study. Specifically, we (1) compile an affective ad dataset capable of evoking coherent emotions across users; (2) explore the efficacy of content-centric convolutional neural network (CNN) features fo…
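To make the content-centric side of the abstract concrete, below is a minimal sketch of CNN feature extraction for ad affect recognition, assuming a pretrained ResNet-18 backbone and a toy two-class valence head; the paper's actual architecture, preprocessing, and dataset are not reproduced here.

```python
# Minimal sketch of content-centric CNN features for ad AR (assumed setup:
# pretrained ResNet-18, frames mean-pooled to one clip-level descriptor).
import torch
import torch.nn as nn
import torchvision.models as models

# Pretrained backbone with the classification head removed, so the forward
# pass yields a 512-d descriptor per frame.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def frame_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, 224, 224) normalized RGB frames -> (N, 512) features."""
    return backbone(frames)

# Random tensors stand in for 16 sampled, ImageNet-normalized ad frames.
clip = torch.randn(16, 3, 224, 224)
descriptor = frame_features(clip).mean(dim=0)  # (512,) clip-level descriptor
valence_head = nn.Linear(512, 2)               # hypothetical low/high valence head
logits = valence_head(descriptor)
print(logits.shape)  # torch.Size([2])
```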

Cited by 14 publications (16 citation statements)
References 29 publications (63 reference statements)

“…1. This is one of the few works to examine AR in ads, extending findings reported in [22], [23]. It is also the only work to characterize ad emotions in terms of explicit human opinions, and underlying (content-centric) audiovisual plus (user-centric) EEG features.…”
Section: Introduction (mentioning)
confidence: 53%
“…For example, Yi and Wang's work [147] demonstrated that linear SVM is more suitable for classification than RBM, MLP, and LR. In [109], LDA, linear SVM (LSVM), and Radial Basis SVM (RSVM) classifiers are employed in emotion recognition experiments, and the RSVM obtained the best F1 scores. In [42], both Naive Bayes and SVM are used as classifiers in unimodal and multimodal conditions.…”
Section: Machine Learning Methods (mentioning)
confidence: 99%
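The classifier comparison described in the citation statement above can be reproduced in miniature with scikit-learn; this sketch uses synthetic features, and the LDA/LSVM/RSVM line-up is the only element taken from the quoted text.

```python
# Synthetic stand-in for the cited comparison: LDA vs. linear SVM vs. RBF SVM,
# scored with F1 as in the quoted statement. Features are random, so the
# ranking here carries no empirical weight.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=32, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("LSVM", SVC(kernel="linear")),
                  ("RSVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```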
“…Currently, the features used in affective video content analysis mainly fall into two categories [107,109]. One considers the stimulus of the video content, extracting features that reflect the emotions conveyed by the video content itself.…”
Section: Affective Computing Of Videos (mentioning)
confidence: 99%
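As a toy illustration of the first (content-centric) feature category, the sketch below computes simple audiovisual descriptors directly from the stimulus; the specific descriptors and the synthetic frame/audio data are illustrative, not the cited works' feature sets.

```python
# Toy content-centric descriptors computed from the stimulus itself:
# brightness statistics, crude motion energy, and RMS audio loudness.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((120, 64, 64, 3))   # 120 RGB frames (toy resolution)
audio = rng.standard_normal(16000 * 5)  # 5 s of mono audio at 16 kHz

brightness = frames.mean(axis=(1, 2, 3))         # per-frame luminance proxy
motion = np.abs(np.diff(frames, axis=0)).mean()  # mean inter-frame difference
loudness = float(np.sqrt(np.mean(audio ** 2)))   # RMS audio energy

content_features = np.array([brightness.mean(), brightness.std(),
                             motion, loudness])
print(content_features)
```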
“…Implicit tagging has been defined as using non-verbal spontaneous behavioral responses to find relevant tags for multimedia content [40]. These responses include Electroencephalography (EEG) and Magnetoencephalography (MEG) along with physiological responses such as heart rate and body temperature [2,31,48,49,54] and gazing behavior [13,41,44,50]. Determining the affective dimensions of valence and arousal using these methods has been shown to be very effective.…”
Section: Human Centered Affect Recognition (mentioning)
confidence: 99%
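A hedged sketch of the user-centric (implicit tagging) route described above: EEG band-power features via Welch's method feeding a valence classifier. The sampling rate, channel count, band edges, and labels are assumptions for illustration, not parameters from the cited studies.

```python
# EEG band-power features (theta/alpha/beta via Welch PSD) -> SVM valence
# classifier, on synthetic trials. All constants below are assumed.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 128  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(trial: np.ndarray) -> np.ndarray:
    """trial: (channels, samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
trials = rng.standard_normal((60, 14, FS * 10))  # 60 trials, 14 ch, 10 s each
labels = rng.integers(0, 2, size=60)             # toy low/high valence labels
X = np.array([band_powers(t) for t in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```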
“…Past research has also shown that affect-based video ad analytics compares favorably [58] to visual similarity and text-based analysis [38]. Affect-based content analysis works well in combination with behavioural read-outs of cognitive load and arousal [27] as well as user attention [48]. Computational work on affect-based ad analysis has shown that analyzing even 10-second segments can give meaningful predictions of induced emotional states [47,48,58].…”
Section: Introduction (mentioning)
confidence: 99%
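The segment-level claim above can be illustrated with a sliding-window scheme: score an ad in 10-second windows, then aggregate. `predict_valence` is a hypothetical stand-in for a trained model, not the cited papers' classifier.

```python
# Sliding 10-second windows over an ad-length signal, scored per segment and
# averaged into an ad-level estimate. The "model" is a toy placeholder.
import numpy as np

FS = 16000      # audio sampling rate in Hz (assumed)
SEG = 10 * FS   # 10-second window

def predict_valence(segment: np.ndarray) -> float:
    """Hypothetical model: a bounded function of segment energy."""
    return float(np.tanh(np.mean(segment ** 2)))

rng = np.random.default_rng(2)
ad_signal = rng.standard_normal(FS * 60)  # a 60-second ad (toy signal)

scores = [predict_valence(ad_signal[s:s + SEG])
          for s in range(0, len(ad_signal) - SEG + 1, SEG)]
print("per-segment valence:", np.round(scores, 3))
print("ad-level estimate:", round(float(np.mean(scores)), 3))
```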