Proceedings of the 2nd International Conference on Image and Graphics Processing 2019
DOI: 10.1145/3313950.3313967
Korean sign language recognition based on image and convolution neural network

Cited by 15 publications (10 citation statements); references 5 publications.
“…Similar to action recognition, some recent works [56,35] use CNNs to extract holistic features from image frames and then use the extracted features for classification. Several approaches [37,36] first extract body keypoints and then concatenate their locations as a feature vector.…”
Section: Sign Language Recognition Approaches (mentioning, confidence: 99%)
“…ISLR shares many features with action recognition, and consequently there are several works using CNNs for feature extraction and classification [32,33,34,35]. Recent work has also relied on employing 3D-CNNs [36,37] to capture spatiotemporal information in an ensemble way.…”
Section: Related Work (mentioning, confidence: 99%)
“…Apart from these, we have also studied Chinese Sign Language (Chai et al., 2013), German Sign Language (Dreuw, Deselaers, Keysers, & Ney, 2006), Korean Sign Language (Shin, Kim, & Jang, 2019), Japanese Sign Language (Sako & Kitamura, 2013), etc., to build a robust system that can recognize sign language accurately.…”
Section: Other Sign Languages (mentioning, confidence: 99%)