2021
DOI: 10.1016/j.eswa.2020.114403

American sign language recognition and training method with recurrent neural network

Cited by 98 publications (38 citation statements). References 46 publications.
“…LSTM is a family of RNNs designed to handle vanishing gradients by substituting extended bidirectional LSTM (BiLSTM) neurons [27, 54, 55]. BiLSTM neurons learn long-term dependencies between sequences [5, 31, 56]. A single BiLSTM unit returns low accuracy, especially when learning complex sequences.…”
Section: Methods
confidence: 99%
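The bidirectional recurrence described in the statement above can be sketched minimally in NumPy: one LSTM cell is run over the sequence forward and backward, and the two hidden states are concatenated at each time step. The cell equations are the standard LSTM gate formulation; the function names, weight shapes, and sizes here are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step: four gates computed from input x and previous state h.
    H = h.shape[0]
    z = W @ x + U @ h + b                      # stacked pre-activations, shape (4*H,)
    i = 1.0 / (1.0 + np.exp(-z[:H]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))      # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * H:3 * H]))  # output gate
    g = np.tanh(z[3 * H:])                     # candidate cell update
    c_new = f * c + i * g                      # cell state carries long-term memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def bilstm(seq, W, U, b):
    # Run the same cell over the sequence in both directions and concatenate
    # the forward and backward hidden states at each time step (BiLSTM).
    H = U.shape[1]

    def run(xs):
        h, c = np.zeros(H), np.zeros(H)
        states = []
        for x in xs:
            h, c = lstm_step(x, h, c, W, U, b)
            states.append(h)
        return states

    fwd = run(seq)
    bwd = run(seq[::-1])[::-1]  # re-reverse so time steps align
    return [np.concatenate([ft, bt]) for ft, bt in zip(fwd, bwd)]
```

Each output vector has twice the hidden size, since every time step sees a summary of both its past (forward pass) and its future (backward pass); this is what lets BiLSTM units capture long-range dependencies in both directions.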
“…However, it is demonstrated that certain metrics should be maximized and others minimized to better explore the performance of the selected features and to determine the optimal multi-stacked deep BiLSTM recognition. The following metrics are the most popular for deep neural networks and provide the basis for comparison [5].…”
Section: Methods
confidence: 99%
“…Recurrent Neural Networks (RNNs) are network structures whose inputs are time-series data and whose nodes are connected in a chain [73]. Unlike multi-layer perceptrons, RNNs have a sense of time and a memory of earlier network states, allowing them to learn sequences that vary over time [74] (see Figure 4). At present, the most commonly used RNNs are Long Short-Term Memory (LSTM) networks and Gated Recurrent Unit (GRU) networks.…”
Section: B. Recurrent Neural Network Based Fault Diagnosis
confidence: 99%
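The "sense of time and memory of earlier network states" mentioned above can be demonstrated with a vanilla RNN in a few lines of NumPy: the hidden state is a running summary of all earlier inputs, so the final state depends on the order in which the sequence is presented. The weights and names here are illustrative assumptions, not code from the cited work.

```python
import numpy as np

def rnn(seq, Wx, Wh, b):
    # Vanilla RNN: h is updated from the current input AND the previous h,
    # so h accumulates a memory of earlier time steps.
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h
```

Feeding the same inputs in reverse order generally produces a different final state, which is exactly the order sensitivity that an order-blind model (e.g. one that averages its inputs) lacks; LSTM and GRU cells refine this recurrence with gating to keep gradients stable over long sequences.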