2022
DOI: 10.1007/s10579-022-09586-4

Semi-automation of gesture annotation by machine learning and human collaboration

Abstract: Gesture and multimodal communication researchers typically annotate video data manually, even though this can be a very time-consuming task. In the present work, a method to detect gestures is proposed as a fundamental step towards a semi-automatic gesture annotation tool. The proposed method can be applied to RGB videos and requires annotations of part of a video as input. The technique deploys a pose estimation method and active learning. In the experiment, it is shown that if about 27% of the video is annot…
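The abstract describes a pipeline in which pose features extracted from RGB video feed a gesture detector, and active learning decides which parts of the video a human should annotate next. The sketch below is a rough illustration of that idea only: the scikit-learn RandomForestClassifier, the uncertainty-sampling query strategy, the batch size, and the synthetic per-frame pose features are assumptions made for the sketch, not the authors' reported setup; only the roughly 27% annotation budget is taken from the abstract.

```python
# Minimal sketch of semi-automatic gesture annotation with active learning.
# Pose features (e.g. keypoint coordinates from a pose estimator) are assumed
# to be precomputed per frame; everything concrete here is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for per-frame pose features (n_frames x n_feature_dims).
n_frames, n_dims = 1000, 34
pose_features = rng.normal(size=(n_frames, n_dims))
true_labels = (pose_features[:, 0] > 0).astype(int)  # placeholder "ground truth"

# Seed the process with a small set of human-annotated frames.
labeled = set(rng.choice(n_frames, size=50, replace=False).tolist())
budget = int(0.27 * n_frames)  # stop once roughly 27% of frames are annotated

clf = RandomForestClassifier(n_estimators=100, random_state=0)

while len(labeled) < budget:
    idx = sorted(labeled)
    clf.fit(pose_features[idx], true_labels[idx])

    # Uncertainty sampling: query frames whose predicted gesture probability
    # is closest to 0.5, i.e. where the classifier is least certain.
    unlabeled = np.array([i for i in range(n_frames) if i not in labeled])
    proba = clf.predict_proba(pose_features[unlabeled])[:, 1]
    query = unlabeled[np.argsort(np.abs(proba - 0.5))[:25]]

    # In a real tool the annotator would label these frames; here the
    # placeholder ground truth is revealed instead.
    labeled.update(query.tolist())

# The remaining frames receive proposed labels for the annotator to review.
remaining = np.array([i for i in range(n_frames) if i not in labeled])
proposed = clf.predict(pose_features[remaining])
print(f"Annotated {len(labeled)} frames manually; proposed labels for {len(remaining)} frames.")
```

In this setup, the human effort is concentrated on the frames the model finds hardest, which is what allows a partial annotation of the video to drive automatic proposals for the rest.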

Cited by 4 publications (7 citation statements)
References 64 publications (62 reference statements)
“…In terms of gesture annotation, combining (automatic) motion-tracking data with manual annotation allows labelers to achieve consistent measures of time points when an individual gesture begins or ends. Recent effort has focused on using automatic annotation tools to speed up the annotation process (e.g., SPUDNIG [163]; the annotation tool from [71]). While such automatic systems have revealed high reliability with human coders identifying moments of movement (i.e., gesturing) and moments of rest, there is still much work to be done with regard to automatically assessing more nuanced aspects of individual gestures (in terms of type or function with regards to speech).…”
Section: Manual Annotation and Existing Multimodal Corpora (mentioning)
confidence: 99%
“…Spanish is no exception, which is why several authors use the same term to cover different oral genres (Alcaraz Varó, 2000; Moyano, 2001; Parodi et al., 2009). Secondly, coding and analyzing data from oral discourse consumes a great deal of time and effort (Ienaga et al., 2022; Kaur & Ali, 2018). Greater access to affordable, high-quality recording equipment, together with the development of software designed specifically for the analysis of oral discourse (O'Halloran, 2012; Pouw et al., 2020), has progressively facilitated studies in this area.…”
Section: Los Géneros Discursivos (unclassified)
“…2. Collecting data on oral discourse, transcribing it, and analyzing it demands a great deal of effort and time from the researcher (Ienaga et al., 2022; Kaur & Ali, 2018). The gradual availability of affordable, high-quality recording equipment, along with the development of software designed specifically for the analysis of oral discourse (O'Halloran, 2012; Pouw et al., 2020), has facilitated studies in this area.…”
Section: Los Géneros Académicos Orales (unclassified)