2015
DOI: 10.1371/journal.pone.0138198

Predicting the Valence of a Scene from Observers’ Eye Movements

Abstract: Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks, such as video genre classification and content-based image retrieval. Recently, there has been increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene, such as its valence. In order to determine the emotional category of images using eye movements, existing methods often learn a classifier using several features that are extracted from eye movements…
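The abstract outlines the standard recipe: summarize each image-viewing trial as a fixed-length feature vector computed from eye movements, then learn a classifier on those vectors. Below is a minimal sketch in Python, assuming gaze data has already been segmented into fixation durations and saccade amplitudes; the `eye_movement_features` helper and the specific statistics it computes are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def eye_movement_features(fix_durations_ms, saccade_amps_deg):
    """Summarize one image-viewing trial as a fixed-length feature vector.

    fix_durations_ms: fixation durations in ms, shape (n_fixations,)
    saccade_amps_deg: saccade amplitudes in degrees, shape (n_saccades,)
    """
    return np.array([
        fix_durations_ms.mean(),   # mean fixation duration
        fix_durations_ms.std(),    # fixation-duration variability
        len(fix_durations_ms),     # number of fixations on the image
        saccade_amps_deg.mean(),   # mean saccade amplitude
        saccade_amps_deg.max(),    # largest saccade
    ])

# Example: one synthetic trial (values are made up for illustration)
rng = np.random.default_rng(0)
feats = eye_movement_features(
    rng.uniform(150, 600, size=12),   # plausible fixation durations in ms
    rng.uniform(0.5, 10.0, size=11),  # plausible saccade amplitudes in degrees
)
print(feats)
```

A vector like this, computed per trial, is what a downstream valence classifier would consume.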

Cited by 45 publications (32 citation statements)
References 40 publications
“…While observed ER results are modest, they are still better than or competitive with prior eye- and neural-based ER approaches. Eye-based features were found to achieve 52.5% valence recognition accuracy in [17], where emotions were induced by presenting viewers with diverse types of emotional scenes, as opposed to the emotional faces specifically employed in this work. Also, prior neural-based emotion studies [6], [24] achieve only around 60% valence recognition with lab-grade sensors.…”
Section: Valence (+ve vs -ve Emotion) Recognition
confidence: 98%
“…Liu et al [5] perform ER with differential autoencoders and attempt cross-modal ER based on shared EEG- and eye-based representations. Tavakoli et al [17] perform valence (+ve vs -ve emotion) recognition using eye movements, with an emphasis on evaluating eye-based features and their fusion, and achieve 52.5% accuracy with discriminative feature selection and a linear SVM.…”
Section: Related Work
confidence: 99%
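The pipeline named in the citation statement above (eye-based features, discriminative feature selection, then a linear SVM) can be sketched with scikit-learn. This is an illustration on synthetic data, not the cited authors' implementation: the feature count, the `SelectKBest` scoring function, and the SVM regularization strength are all assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in data: 200 trials, 20 eye-movement features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)  # valence labels: 0 = negative, 1 = positive

clf = make_pipeline(
    StandardScaler(),             # SVMs are sensitive to feature scale
    SelectKBest(f_classif, k=8),  # keep the 8 most discriminative features
    LinearSVC(C=1.0),             # linear SVM on the selected features
)

scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

The ANOVA F-test in `SelectKBest` is one common stand-in for "discriminative feature selection"; the cited work may rank or select features differently.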
“…Wang et al [3] combined eye tracking with computational attention models in order to screen for mental disorders such as autism spectrum disorder (see also [4], [5]). Tavakoli et al [6] investigated eye-movement-based features for determining the valence of images.…”
Section: Introduction
confidence: 99%
“…In contrast, user-centric AR methods [14]–[16] estimate the stimulus-evoked emotion based on physiological changes observed in viewers (content consumers). Physiological signals indicative of emotions include pupillary dilation [26], eye-gaze patterns [9], [27], and neural activity [14], [15], [28]. Both content- and user-centric methods require labels denoting stimulus emotion; such labels are compiled from annotators whose affective opinions are deemed acceptable [29], [30], given that emotion perception is highly subjective.…”
Section: Affect Recognition
confidence: 99%