2021
DOI: 10.3991/ijoe.v17i01.18585
Web Based Recognition and Translation of American Sign Language with CNN and RNN

Abstract: Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. These movements are of limited use, however, when communicating with people who cannot understand them. A mechanism is therefore needed to translate between the two groups. Interaction would be easier if there were infrastructure that could directly convert signs to te…

Cited by 15 publications (12 citation statements)
References 9 publications (10 reference statements)
“…A two-dimensional convolution with a kernel size of 3×3 is applied to the input RGB image of 224×224 pixels during the training phase. The layers that make up the VGG16 network have the following specifications (Bendarkar et al., 2021; Maysanjaya, 2020)…”
Section: White Blood Cells Dataset
confidence: 99%
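The quoted VGG16 specification (224×224 RGB input, 3×3 kernels) follows from the standard convolution output-size formula. A minimal sketch in Python, assuming VGG16's usual "same"-padding convolutions and 2×2 pooling (the stride and padding values here are assumptions, not taken from the cited text):

```python
def conv_output_size(in_size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial size of a 2-D convolution output along one dimension."""
    return (in_size + 2 * padding - kernel) // stride + 1

# A VGG16-style 3x3 convolution with padding 1 preserves the 224x224 input size
assert conv_output_size(224, kernel=3, stride=1, padding=1) == 224

# The 2x2 max-pooling layers between VGG16 blocks halve each spatial dimension
assert conv_output_size(224, kernel=2, stride=2, padding=0) == 112
```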
“…Most alphabet and number recognition involves static gestures. As in the American Sign Language recognition studies of Al-Amin (2017) and Bendarkar et al. (2021), all the alphabet signs are static except the letters j and z, which are dynamic.…”
Section: Reference
confidence: 99%
“…Non-signers, i.e., people who are not familiar with sign language, can communicate with deaf people using translators that convert sign language into text or speech (Truong et al., 2016). These translators predominantly use ML algorithms to identify the correct sign, such as convolutional and recurrent neural networks (Bendarkar et al., 2021) or deep learning (Bantupalli and Xie, 2018). Communication gloves are a very promising human-machine interface (HMI) device: their sensors interpret the motions of sign languages into natural language, combining virtual and augmented reality with AAC (Ozioko and Dahiya, 2022).…”
Section: Ai Technologies That Support Communication and Learning Assi...
confidence: 99%