2008
DOI: 10.1109/tmm.2008.2004911
Affective Level Video Segmentation by Utilizing the Pleasure-Arousal-Dominance Information

Cited by 48 publications (33 citation statements)
References 32 publications
“…Affective video content analysis aims at automatic emotion recognition, with applications in mood-based personalized content delivery, video indexing, and summarization [61,62]; ground truth data is needed both for training and benchmarking. The HUMAINE database [63], consisting of 50 clips of 1.5-3 min, is annotated with a wide range of labels, i.e., emotion-related states (intensity, arousal, valence, etc.…”
Section: Emotion Datasets
confidence: 99%
“…Thus, we are motivated to study modality fusion strategies that may benefit induced emotion recognition. In addition, the LSTM model has low performance for predicting movie-induced emotions [31], yet it has achieved leading performance in various emotion recognition tasks due to its ability to model temporal context (e.g. [32]).…”
Section: Previous Work on the LIRIS-ACCEDE Database
confidence: 99%
“…Recently, increased attention has been paid to recognizing emotions in spectators induced by affective content due to potential applications, such as emotion-based content delivery [1] or video indexing and summarization [2]. However, recognizing the emotions induced by affective content remains a challenging task, with only weak to moderate correlations achieved between automatic predictions and human annotations [3].…”
Section: Introduction
confidence: 99%
“…Their training data consisted of 36 full-length popular Hollywood movies divided into 2040 scenes. Arifin and Cheung [10] based their framework on a hierarchical coupled dynamic Bayesian network to model the dependencies between the Pleasure-Arousal-Dominance (PAD) dimensions. This model takes into account the influence of former emotional events.…”
Section: Emotional Classification
confidence: 99%