2022
DOI: 10.3390/s22124558

Spatial Attention-Based 3D Graph Convolutional Neural Network for Sign Language Recognition

Abstract: Sign language is the main channel for hearing-impaired people to communicate with others. It is a visual language that conveys highly structured manual and non-manual components, so it requires considerable effort for hearing people to master. Sign language recognition aims to ease this difficulty and bridge the communication gap between hearing-impaired people and others. This study presents an efficient architecture for sign language recognition based on a convolutional graph neural network…

Cited by 17 publications (16 citation statements)
References 43 publications (51 reference statements)

“…Comparatively, the proposed Two-Stream GCN model significantly outperformed existing models, achieving 34.41% accuracy. Specifically, ST-GCN [22] reached 16.48%, and the 3DGCN [27] model, which only utilized spatial context information, achieved 25.05%, demonstrating the superiority of our proposed approach. These results emphasize the effectiveness of the Two-Stream GCN model in addressing the challenges posed by large-scale sign language datasets.…”
Section: G. Performance Accuracy With ASLLVD Dataset (mentioning)
confidence: 79%
“…The proposed Two-Stream GCN model exhibits a top accuracy of 34.41%, setting a benchmark for large-scale sign language datasets. To contextualize this result, a comparative analysis was performed with existing models, specifically ST-GCN [22] and 3DGCN [27]. Hammadi et al. introduced a recent method for ASLLVD data, employing a graph-based convolutional neural network with separable 3DGCN layers and a spatial attention mechanism [27].…”
Section: G. Performance Accuracy With ASLLVD Dataset (mentioning)
confidence: 99%
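
The citation statement above describes a graph convolutional network over skeleton joints combined with a spatial attention mechanism. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea only, not the authors' implementation: the class name, the placeholder adjacency matrix, the tensor shapes, and the learned additive attention mask are all assumptions made for illustration, and the separable temporal convolution part of the 3DGCN is omitted.

```python
import torch
import torch.nn as nn

class SpatialAttentionGraphConv(nn.Module):
    """Graph convolution over skeleton joints with a learned additive attention mask (illustrative sketch)."""

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Fixed (normalized) joint adjacency; the attention mask is learned and
        # added to it so the layer can reweight joint-to-joint connections.
        self.register_buffer("A", adjacency)
        self.attention = nn.Parameter(torch.zeros_like(adjacency))
        # 1x1 convolution mixes channels independently at every frame and joint.
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, in_channels, frames, joints)
        a = self.A + self.attention    # attention-modulated adjacency
        x = self.project(x)            # (batch, out_channels, frames, joints)
        # Aggregate each joint's features from its (re-weighted) neighbours.
        return torch.einsum("bctw,wv->bctv", x, a)

# Hypothetical usage: 25 skeleton joints, 3D coordinates, 32-frame clips.
joints = 25
A = torch.eye(joints)                  # placeholder adjacency (self-loops only)
layer = SpatialAttentionGraphConv(in_channels=3, out_channels=64, adjacency=A)
clip = torch.randn(2, 3, 32, joints)   # (batch, xyz, frames, joints)
print(layer(clip).shape)               # torch.Size([2, 64, 32, 25])
```

In a full model, the placeholder identity adjacency would be replaced by the normalized skeleton graph of the chosen pose estimator, and a temporal convolution would follow each spatial layer to capture motion across frames.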