The speech- and hearing-impaired community uses sign language as its primary means of communication, yet it is challenging for the general population to interpret or learn sign language fully. A sign language recognition system is therefore needed to address this communication barrier. Most current systems rely on wearable sensors, which keeps them unaffordable for many individuals, and existing vision-based recognition frameworks do not consider all of the spatial and temporal information required for accurate recognition. This study proposes a novel vision-based hybrid deep neural network methodology for recognizing Indian and Russian sign gestures. The proposed framework aims to establish a single pipeline for tracking and extracting multi-semantic properties, such as non-manual components and manual co-articulations. Spatial features are extracted from the sign gestures by a 3D deep neural network with atrous convolutions, while temporal and sequential features are captured by an attention-based Bi-LSTM. Abstract high-level features are extracted with modified autoencoders, and a hybrid attention module provides the discriminative features that separate sign gestures from unwanted transition gestures. The proposed model is evaluated on a novel multi-signer Indo-Russian sign language dataset, where the hybrid neural network yields better results than other state-of-the-art frameworks.
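The abstract's atrous (dilated) convolutions enlarge the receptive field without adding parameters, which helps capture wider spatial context in the sign frames. As a minimal sketch of the idea, the following 1D NumPy implementation (a simplification of the paper's 3D case; the function name and shapes are illustrative, not from the paper) spaces the kernel taps `dilation` samples apart:

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    # Valid-mode 1D convolution with a dilated (atrous) kernel:
    # the k taps are spaced `dilation` samples apart, so the
    # effective receptive field is dilation*(k-1)+1 samples
    # while the parameter count stays at k.
    k = len(w)
    span = dilation * (k - 1) + 1      # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, dilation=1))  # each output spans 3 input samples
print(dilated_conv1d(x, w, dilation=2))  # same 3 weights now span 5 samples
```

With `dilation=2` the same three weights cover five input positions, which is the mechanism 3D atrous convolutions use to trade spatial coverage against parameter count.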
A sign language recognition system aims to recognize the sign language used by the hearing- and vocally impaired population. Interpreting isolated signs from static and dynamic gestures is a difficult problem in machine vision: rapid hand movement, facial expression, illumination variation, signer variation, and background complexity are among the most serious challenges in this arena. Although deep learning-based models account for the field's state-of-the-art results, these issues have not been fully addressed. To overcome them, we propose a hybrid neural network architecture for the recognition of isolated Indian and Russian sign language. For static gesture recognition, the proposed framework uses a 3D convolutional network with an atrous convolution mechanism for spatial feature extraction. For dynamic gesture recognition, the framework integrates semantic spatial multi-cue feature detection and extraction with temporal-sequential feature extraction. The semantic spatial multi-cue module generates feature maps for the full frame, pose, face, and hands; face and hand detection use the Grad-CAM and CamShift algorithms. The temporal-sequential module consists of a modified autoencoder with a GELU activation function for abstract high-level feature extraction and a hybrid attention layer, which integrates segmentation and spatial attention mechanisms. This work also contributes a novel multi-signer, single- and double-handed isolated sign representation dataset for Indian and Russian Sign Language, on which the experiments were conducted. The accuracy obtained was 99.76% for static isolated sign recognition and 99.85% for dynamic isolated sign recognition.
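The attention mechanisms described above share a common core: score each per-frame feature vector, softmax-normalize the scores into weights, and form a weighted sum so that informative frames dominate and transition frames are suppressed. A minimal NumPy sketch of this attention-weighted temporal pooling (the scoring vector `v` and all shapes are hypothetical placeholders, not the paper's learned parameters):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1D score vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(h, v):
    # h: (T, D) per-frame features (e.g., Bi-LSTM outputs over T frames)
    # v: (D,) scoring vector (learned in a real model; random here)
    scores = h @ v            # one scalar relevance score per frame
    alpha = softmax(scores)   # attention weights over time, summing to 1
    context = alpha @ h       # (D,) weighted sum of frame features
    return context, alpha

rng = np.random.default_rng(0)
h = rng.standard_normal((6, 4))   # 6 frames, 4-dim features
v = rng.standard_normal(4)
ctx, alpha = attention_pool(h, v)
print(alpha)       # weights sum to 1 (softmax normalization)
print(ctx.shape)
```

In the full model the weights would be produced by learned layers over spatial or segmentation cues rather than a fixed vector, but the normalize-and-pool step is the same.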
We also compared the proposed work against other baseline models on benchmark datasets, and it achieved better performance in terms of accuracy.