“…For a time-series sample x = x_1, x_2, ..., x_t, ..., x_T, the GRU computations are given in Eqs. (7)–(12) [9] (Fig. 3). We trained the sign gesture recognizer using the network depicted in Fig.…”
Section: GRU
confidence: 99%
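The excerpt cites six equations, Eqs. (7)–(12), without reproducing them. For reference, the standard GRU update (after Cho et al., which equations of this form typically follow; the paper's exact notation and equation split may differ) is:

```latex
\begin{aligned}
z_t &= \sigma\!\left(W_z x_t + U_z h_{t-1} + b_z\right) && \text{(update gate)} \\
r_t &= \sigma\!\left(W_r x_t + U_r h_{t-1} + b_r\right) && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right) + b_h\right) && \text{(candidate state)} \\
h_t &= \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(hidden state)}
\end{aligned}
```

Here $\sigma$ is the logistic sigmoid, $\odot$ is elementwise multiplication, and $W_\ast$, $U_\ast$, $b_\ast$ are learned parameters.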
“…However, we aim to develop a system from raw data itself, without extracting handcrafted features. Deep learning classifiers such as the GRU (Gated Recurrent Unit) [8] and BiLSTM (Bidirectional Long Short-Term Memory) [10] are good at modeling sequential data and have been applied in a variety of applications, such as handwriting recognition [16], Natural Language Processing (NLP) [11,22], time-series analysis [5], and activity recognition [20]. Thus, we investigated them on the SLR data set developed by Kumar et al. [15].…”
Sign Language Recognition (SLR) narrows the communication gap with hearing-impaired people, i.e., it connects hearing-impaired persons with those who need to communicate with them but do not understand sign language. This paper presents an end-to-end deep learning approach for recognizing sign gestures recorded with a 3D sensor (e.g., Microsoft Kinect). Typical machine-learning-based SLR systems require feature extraction before a model is applied, and these features must be chosen carefully because recognition performance relies heavily on them. Our end-to-end approach eliminates this problem by removing the need for handcrafted features: deep learning models can work directly on raw data and learn higher-level representations (features) themselves. To test this hypothesis, we used two recent and promising deep learning models, the Gated Recurrent Unit (GRU) and Bidirectional Long Short-Term Memory (BiLSTM), and trained them on raw data only. We compared the two models against each other and against the base paper's results. The experiments show that the proposed method outperforms the existing work, with the GRU achieving 70.78% average accuracy with front-view training.
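The recurrent classification described in the abstract can be sketched minimally. Below is an illustrative NumPy forward pass of a single-layer GRU over one raw gesture sequence; all shapes, weight names, and the random initialization are assumptions for demonstration, not the paper's actual network or trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU time step following the standard Cho et al. formulation."""
    Wz, Uz, bz = params["z"]
    Wr, Ur, br = params["r"]
    Wh, Uh, bh = params["h"]
    z = sigmoid(x_t @ Wz + h_prev @ Uz + bz)          # update gate
    r = sigmoid(x_t @ Wr + h_prev @ Ur + br)          # reset gate
    h_tilde = np.tanh(x_t @ Wh + (r * h_prev) @ Uh + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde           # new hidden state

def gru_forward(X, params, hidden_dim):
    """Run the GRU over a raw sequence X of shape (T, features)."""
    h = np.zeros(hidden_dim)
    for x_t in X:
        h = gru_step(x_t, h, params)
    return h  # final state; a classifier head (softmax) would consume this

# Illustrative setup: 4 raw features per frame, 8 hidden units.
rng = np.random.default_rng(0)
feat_dim, hidden_dim = 4, 8
params = {
    gate: (rng.standard_normal((feat_dim, hidden_dim)) * 0.1,
           rng.standard_normal((hidden_dim, hidden_dim)) * 0.1,
           np.zeros(hidden_dim))
    for gate in ("z", "r", "h")
}
X = rng.standard_normal((10, feat_dim))  # one gesture: 10 frames of raw data
h_final = gru_forward(X, params, hidden_dim)
print(h_final.shape)  # (8,)
```

In an end-to-end system of the kind described, this recurrence runs directly on the raw sensor frames, and a final dense softmax layer over `h_final` would produce the gesture class; a BiLSTM variant would additionally process the sequence in reverse and concatenate both final states.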