2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489099

The OMG-Emotion Behavior Dataset

Abstract: This paper is the basis paper for the accepted IJCNN challenge One-Minute Gradual-Emotion Recognition (OMG-Emotion), by which we hope to foster long-term emotion classification using neural models for the benefit of the IJCNN community. The novelty of the proposed corpus lies in its data collection and annotation strategy, based on emotion expressions that evolve over time within a specific context. Different from other corpora, we propose a novel multimodal corpus for emotion expression recognition, which uses gradual anno…

Cited by 90 publications (86 citation statements)
References 20 publications

Citation statements:

“…Table 3 presents the performance of the models in terms of F1 on the development set (note that the annotations of the test set are not publicly available). From the table, we can see that on this database, our classic monomodal models outperform the other methods reported in the literature [47], i.e., Support Vector Machine (SVM) and Random Forest (RF). More specifically, our classic monomodal models yield a higher F1 than SVM (36.5% vs 33.0%) for audio, and than RF (37.9% vs 37.0%) for video.…”
Section: Results on OMG-Emotion
Citation type: mentioning
Confidence: 75%
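
The comparison above hinges on macro F1 over the development set. A minimal sketch of such a per-modality evaluation, assuming scikit-learn and synthetic placeholder data (the actual OMG-Emotion features and the SVM/RF configurations of [47] are not reproduced here):

```python
# Hypothetical sketch: per-modality classifiers compared by dev-set F1,
# mirroring the kind of comparison quoted above. Data are synthetic
# placeholders, not OMG-Emotion features or labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for utterance-level features and 7 categorical emotion labels.
X_train, y_train = rng.normal(size=(400, 64)), rng.integers(0, 7, size=400)
X_dev, y_dev = rng.normal(size=(100, 64)), rng.integers(0, 7, size=100)

for name, clf in [("SVM", SVC()), ("RF", RandomForestClassifier())]:
    clf.fit(X_train, y_train)
    # Macro-averaged F1 over the emotion classes on the development set.
    score = f1_score(y_dev, clf.predict(X_dev), average="macro")
    print(f"{name} dev F1: {score:.3f}")
```
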
“…To annotate the videos, the listeners used a modified version of the KT-Annotation Tool [20], which was designed as a dynamic tool for collecting dataset annotations. The tool provides annotators with a web-based system that can be adjusted to different annotation scenarios.…”
Section: B. Self-Assessment Annotation
Citation type: mentioning
Confidence: 99%
“…Improving Happy cluster centroid proximity through (9) will discretize the 2D valence-arousal space, e.g., into 400 (20 × 20) classes; the proposed approach can then be applied across different in-the-wild datasets, such as AffWild, RECOLA [20], and the OMG-Emotion Behavior Dataset [1].…”
Section: Conclusion and Further Work
Citation type: mentioning
Confidence: 99%
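
The 20 × 20 discretization mentioned above maps each continuous (valence, arousal) annotation to one of 400 classes. A minimal sketch, assuming both dimensions are normalized to [-1, 1] (this uniform binning is illustrative only; the exact scheme via the cited work's equation (9) may differ):

```python
def va_to_class(valence: float, arousal: float, bins: int = 20) -> int:
    """Map a (valence, arousal) pair in [-1, 1]^2 to one of bins * bins classes.

    Illustrative uniform binning only, not the cited work's method.
    """
    # Rescale [-1, 1] to bin indices [0, bins); clip the upper edge into the last bin.
    v = min(int((valence + 1.0) / 2.0 * bins), bins - 1)
    a = min(int((arousal + 1.0) / 2.0 * bins), bins - 1)
    return a * bins + v  # row-major class index in [0, bins * bins)

# Example: mildly positive valence, slightly low arousal.
print(va_to_class(0.3, -0.2))  # -> 173, one of 400 classes for bins=20
```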