2016
DOI: 10.1007/s10579-016-9377-0
The JESTKOD database: an affective multimodal database of dyadic interactions

Cited by 9 publications (6 citation statements) | References 29 publications
“…Our experimental setup offers flexibility to incorporate several other features such as synchrony for improved CER. As a future study, we aim to conduct the current analysis on our in-house JESTKOD database that is also structured to address affective dyadic interactions in agreement and disagreement scenarios [24].…”
Section: Discussion
confidence: 99%
“…The experiment was set up with eye-trackers to ensure the accuracy of eye gaze data collected, similarly to previous works [20,25]. In contrast with the other motion capture datasets [2,18,20,25], rather than conducting the experiment around a table [20,25] where only hand movements or upper body motions are collected, we implemented a free-standing setting in our setup. This configuration enables us to collect whole-body motion data.…”
Section: Available HHI Datasets
confidence: 99%
“…The latter is considerably less frequent due to its tedious manual annotation process (McCowan et al., 2005; Douglas-Cowie et al., 2007; McKeown et al., 2010; Lücking et al., 2012; Vella and Paggio, 2013; Vandeventer et al., 2015; Naim et al., 2015; Chou et al., 2017; Paggio and Navarretta, 2017; Cafaro et al., 2017; Joo et al., 2019b; Kossaifi et al., 2019; Chen et al., 2020; Khan et al., 2020;. The most frequent low-level annotations that the datasets provide are the participants' body poses and facial expressions (Douglas-Cowie et al., 2007; Rehg et al., 2013; Bilakhia et al., 2015; Vandeventer et al., 2015; Naim et al., 2015; Edwards et al., 2016; Cafaro et al., 2017; Feng et al., 2017; Georgakis et al., 2017; Paggio and Navarretta, 2017; Bozkurt et al., 2017; Andriluka et al., 2018; von Marcard et al., 2018; Mehta et al., 2018; Lemaignan et al., 2018; Joo et al., 2019b; Kossaifi et al., 2019; Schiphorst et al., 2020; Doyran et al., 2021;. Given their annotation complexity, they are usually automatically retrieved with tools like OpenPose (Cao et al., 2019), and manually fixed or discarded.…”
Section: Datasets
confidence: 99%
“…Indeed, some of the datasets have been complementary annotated and added in posterior studies. As a result, most common high-level labels consist of elicited emotions (McCowan et al., 2005; Douglas-Cowie et al., 2007; van Son et al., 2008; McKeown et al., 2010; Naim et al., 2015; Vandeventer et al., 2015; Chou et al., 2017; Paggio and Navarretta, 2017; Maman et al., 2020; Doyran et al., 2021), action labels (Soomro et al., 2012; Yonetani et al., 2016; Silva et al., 2018; Abebe et al., 2018; Carreira et al., 2019; Zhao et al., 2019; Schiphorst et al., 2020; Monfort et al., 2020; Martín-Martín et al., 2021), and social cues/signals (Hung and Chittaranjan, 2010; Sanchez-Cortes et al., 2012; Ringeval et al., 2013; Vandeventer et al., 2015; Shukla et al., 2016; Bozkurt et al., 2017; Cafaro et al., 2017; Feng et al., 2017; Lemaignan et al., 2018; Cabrera-Quiros et al., 2018; Celiktutan et al., 2019; Chen et al., 2020; Maman et al., 2020).…”
Section: Datasets
confidence: 99%