2021
DOI: 10.3389/fpsyg.2021.628728
Controlling Video Stimuli in Sign Language and Gesture Research: The OpenPoseR Package for Analyzing OpenPose Motion-Tracking Data in R

Abstract: Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movemen…

Cited by 12 publications (6 citation statements)
References 19 publications
“…Based on signal processing alone, we have detected systematic changes reflective of a linguistically maturing communication system, from continuous multi‐articulatory kinematics of silent gestures. We applied computer vision techniques to extract kinematics from video data (e.g., Östling, Börstell, & Courtaux, 2018 ; Ripperda et al., 2020 ; Trettenbrein & Zaccarella, 2021 ). We then quantified kinematic relationships between gestural utterances (Pouw & Dixon, 2019 ).…”
Section: Discussion
confidence: 99%
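The statement above mentions quantifying kinematic relationships between gestural utterances (Pouw & Dixon, 2019). One common way to compare two movement time series of different lengths is dynamic time warping; the sketch below is a generic illustration of that technique, not the authors' actual implementation.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series.

    Aligns the series non-linearly in time and returns the accumulated
    absolute difference along the optimal warping path.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    # d[i][j] = cost of aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Identical velocity traces align perfectly (distance 0); a constant
# offset of 1 over two samples accumulates a distance of 2.
same = dtw_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
offset = dtw_distance([0.0, 0.0], [1.0, 1.0])
```

Applied to speed or position traces extracted from pose-tracking output, smaller distances indicate more kinematically similar utterances.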
“…Based on signal processing alone, we have detected systematic changes reflective of a linguistically maturing communication system, from continuous multi‐articulatory kinematics of silent gestures. We applied computer vision techniques to extract kinematics from video data (e.g., Östling, Börstell, & Courtaux, 2018 ; Ripperda et al., 2020 ; Trettenbrein & Zaccarella, 2021 ). We then quantified kinematic relationships between gestural utterances (Pouw & Dixon, 2019 ).…”
Section: Discussionmentioning
confidence: 99%
“…Depending on the system and model, the general pose of the actor can be determined based on predefined key points such as the head, elbows, shoulders, or feet, but also the position and posture of the hands and fingers, facial movements, and gaze directions. These data can then be used to quantify a number of parameters such as the amount of motion of an actor or a particular body part [186]. In animal research, tools such as DeepLabCut [112] or SLEAP revolutionized the ease with which researchers can track morphologically unique body poses in a wide range of animals.…”
Section: Most Tools Are Accessible If Video Footage Of the Communicat...
confidence: 99%
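The statement above notes that tracked key points can be used to quantify the amount of motion of an actor or a particular body part. A minimal sketch of that idea follows: summing frame-to-frame displacements of each keypoint gives a per-articulator path length. The keypoint names and data layout are illustrative assumptions, not the OpenPoseR implementation.

```python
import math

def amount_of_motion(frames):
    """Total path length (in pixels) travelled by each keypoint.

    `frames` is a list of dicts mapping keypoint name -> (x, y), one dict
    per video frame. Returns keypoint name -> summed Euclidean
    frame-to-frame displacement.
    """
    totals = {}
    for prev, curr in zip(frames, frames[1:]):
        for name, (x1, y1) in curr.items():
            if name not in prev:
                continue  # keypoint was not detected in the previous frame
            x0, y0 = prev[name]
            totals[name] = totals.get(name, 0.0) + math.hypot(x1 - x0, y1 - y0)
    return totals

# Toy example: the right wrist moves 3 px then 4 px; the head stays still.
frames = [
    {"wrist_r": (0.0, 0.0), "head": (5.0, 5.0)},
    {"wrist_r": (3.0, 0.0), "head": (5.0, 5.0)},
    {"wrist_r": (3.0, 4.0), "head": (5.0, 5.0)},
]
motion = amount_of_motion(frames)
```

Such per-body-part totals are one way to check that two stimulus clips intended to be comparable do not differ systematically in an actor's movement.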
“…Therefore, taking the modified human posture model as reference, this paper created a new deep neural network model to estimate the two-dimensional coordinates of human skeleton points of badminton players in a single frame image. Its architecture is shown in Figure 2 [13].…”
Section: Estimation Of Human Key Bone Points Based On Optimization Mo...
confidence: 99%