Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems (RACS 2018)
DOI: 10.1145/3264746.3264805
Sign language recognition with recurrent neural network using human keypoint detection

Cited by 36 publications (16 citation statements, all classified as mentioning); citing publications span 2019 to 2023.
References 9 publications.
“…Similar to action recognition, some recent works [56,35] use CNNs to extract the holistic features from image frames and then use the extracted features for classification. Several approaches [37,36] first extract body keypoints and then concatenate their locations as a feature vector. The extracted features are then fed into a stacked GRU for recognizing signs.…”
Section: Sign Language Recognition Approaches (mentioning)
confidence: 99%
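The pipeline described in the statement above (per-frame body keypoints concatenated into a feature vector, then fed to a stacked GRU for sign classification) can be sketched as follows. This is a minimal illustration, not the cited authors' implementation; the keypoint count, hidden size, layer count, and number of classes are assumptions.

```python
import torch
import torch.nn as nn

class KeypointGRUClassifier(nn.Module):
    """Stacked GRU over per-frame keypoint coordinate vectors.

    Assumed setup: 2D (x, y) coordinates for a set of body/hand keypoints
    are concatenated into one flat feature vector per frame.
    """
    def __init__(self, num_keypoints=57, coord_dim=2,
                 hidden_size=256, num_layers=2, num_classes=100):
        super().__init__()
        input_size = num_keypoints * coord_dim   # flattened keypoint locations
        self.gru = nn.GRU(input_size, hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, num_frames, num_keypoints * coord_dim)
        _, h_n = self.gru(x)           # h_n: (num_layers, batch, hidden_size)
        return self.fc(h_n[-1])        # classify from the top layer's final state

# Example: a batch of 4 clips, 32 frames each
clips = torch.randn(4, 32, 57 * 2)
logits = KeypointGRUClassifier()(clips)   # shape (4, 100)
```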
“…Pigou et al use a 2D CNN for sign recognition on Dutch and Flemish sign language [23]. With the advent of off-the-shelf pre-trained human pose estimation systems such as OpenPose [8], several sign language recognition researchers applied recurrent neural networks using keypoints as input features [18,17,10]. However, because movements in sign language can be quick (leading to motion blur), and because there is occlusion between and within the hands, these keypoints can be noisy [10].…”
Section: Isolated Sign Recognition (mentioning)
confidence: 99%
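One common mitigation for the keypoint noise mentioned above (a generic technique, not taken from the cited works) is to mask low-confidence detections and temporally smooth each coordinate track. A minimal sketch, assuming (x, y, confidence) triples per keypoint per frame as produced by an off-the-shelf pose estimator such as OpenPose:

```python
import numpy as np

def smooth_keypoints(frames, conf_threshold=0.3, window=5):
    """Mask unreliable keypoint detections and smooth each coordinate track.

    frames: float array of shape (T, K, 3) holding (x, y, confidence)
    per keypoint per frame. Returns cleaned (T, K, 2) coordinates.
    """
    coords = frames[..., :2].astype(float).copy()      # (T, K, 2)
    conf = frames[..., 2]                               # (T, K)
    coords[conf < conf_threshold] = np.nan              # drop low-confidence points

    # Interpolate missing values along time, then apply a moving average.
    for k in range(coords.shape[1]):
        for d in range(2):
            track = coords[:, k, d]
            valid = ~np.isnan(track)
            if valid.any():
                track[:] = np.interp(np.arange(len(track)),
                                     np.flatnonzero(valid), track[valid])
    kernel = np.ones(window) / window
    for k in range(coords.shape[1]):
        for d in range(2):
            coords[:, k, d] = np.convolve(coords[:, k, d], kernel, mode="same")
    return coords
```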
“…They have proposed to utilize seq2seq models to learn how to translate the spatio-temporal representation of signs into spoken or written language. Recently, researchers developed a simple sign language recognition system based on bidirectional GRUs which simply classifies a given sign language video into one of a predetermined set of classes [30].…”
Section: Related Work (mentioning)
confidence: 99%
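A minimal sketch of a bidirectional-GRU video classifier of the kind described in the statement above, assuming per-frame keypoint feature vectors as input; the input dimension, layer sizes, and class count are hypothetical and not taken from the cited paper.

```python
import torch
import torch.nn as nn

class BiGRUSignClassifier(nn.Module):
    """Bidirectional GRU mapping a keypoint sequence to one of a fixed set of sign classes."""
    def __init__(self, input_size=114, hidden_size=256, num_layers=2, num_classes=100):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        # Forward and backward final states are concatenated before classification.
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, num_frames, input_size)
        _, h_n = self.gru(x)                      # (num_layers * 2, batch, hidden_size)
        last_fwd, last_bwd = h_n[-2], h_n[-1]     # final states of the top layer
        return self.fc(torch.cat([last_fwd, last_bwd], dim=1))

# Example: class probabilities for a batch of 4 clips, 32 frames each
probs = BiGRUSignClassifier()(torch.randn(4, 32, 114)).softmax(dim=1)
```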