2018
DOI: 10.1007/978-3-319-73600-6_15
Implicit Affective Video Tagging Using Pupillary Response

Cited by 7 publications (8 citation statements)
References 27 publications
“…Our method also has the highest F1-score among all 3 methods. It is worth noting that our method does not suffer the lower F1-score caused by the sample imbalance reported by Gui et al. [14], which shows that CorrFeat generalizes better over the data distribution by exploiting feature correlations.…”
Section: Comparison With ML and DL Methods (mentioning)
confidence: 62%
“…Our method achieves the highest accuracy of all methods using only SC and PD. Gui et al. [14] also obtain good results with only PD. However, they validate their method on only 23 of the 27 subjects in the MAHNOB-HCI database.…”
Section: Comparison With ML and DL Methods (mentioning)
confidence: 89%
“…We chose the 12 videos according to the 2D emotion annotations from the self-reports in the MAHNOB dataset [97]. We use videos from the MAHNOB dataset because it is widely used [98, 99] and provides emotion self-reports from more than 30 reviewers. We selected more videos than in CASE because we aimed to collect more samples for each emotion.…”
Section: Datasets (mentioning)
confidence: 99%
“…Wang et al. [132] used EEG signals to construct a new EEG feature with the aid of relationships within the video content by exploiting canonical correlation analysis (CCA). In [40], viewer-related features are extracted from the whole pupil dilation ratio time series, disregarding differences in pupil diameter between the two eyes: its mean and deviation serve as global features, and the power in four spectral bands serves as local features.…”
Section: Viewer-Related Features (mentioning)
confidence: 99%
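The last excerpt only names the pupil features of [40] in passing, so a minimal sketch of such a global/local feature extractor may help. Note the assumptions: the excerpt specifies neither the sampling rate, the spectral estimator, nor the band edges, so the 60 Hz rate, Welch's method, and the four bands below are illustrative placeholders, not the method of [40].

```python
# A minimal sketch of mean/deviation global features and four-band
# spectral-power local features from a pupil dilation ratio series.
# ASSUMPTIONS (not given in the excerpt): 60 Hz sampling rate,
# Welch PSD estimation, and the band edges in BANDS.
import numpy as np
from scipy.signal import welch

FS = 60.0  # assumed eye-tracker sampling rate (Hz)
BANDS = [(0.0, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 4.0)]  # hypothetical (Hz)

def pupil_features(dilation_ratio):
    """Return [mean, std, P_band1..P_band4] for one 1-D pupil
    dilation ratio time series."""
    global_feats = [dilation_ratio.mean(), dilation_ratio.std()]
    # PSD of the mean-removed series; segment length capped by series length.
    freqs, psd = welch(dilation_ratio - dilation_ratio.mean(),
                       fs=FS, nperseg=min(1024, len(dilation_ratio)))
    local_feats = []
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs < hi)
        # Integrated band power (PSD bins times frequency resolution).
        local_feats.append(np.sum(psd[mask]) * (freqs[1] - freqs[0]))
    return np.array(global_feats + local_feats)

# Usage with a synthetic 30-second recording:
rng = np.random.default_rng(0)
x = 1.0 + 0.05 * rng.standard_normal(int(30 * FS))
print(pupil_features(x))  # 2 global + 4 local features
```

To reproduce [40] itself, the band edges and spectral estimator would have to be taken from that paper rather than from this sketch.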