Abstract-This paper proposes a complete framework for an isolated video-based Indian Sign Language Recognition system (INSLR) that integrates various image processing and computational intelligence techniques to address sentence recognition. The system is developed to improve communication between hearing-impaired people and hearing people, offering the former better social prospects. A wavelet-based video segmentation technique is proposed which detects the shapes of various hand signs and head movements in a video-based setup. Shape features of hand gestures are extracted using elliptical Fourier descriptors, which greatly reduce the size of the feature vector for an image. Principal component analysis (PCA) further reduces the feature vector for a particular gesture video, and the resulting features are not affected by scaling or rotation of gestures within a video, which makes the system more flexible. Features generated using these techniques make the feature vector unique to a particular gesture. Recognition of gestures from the extracted features is performed by a Sugeno-type fuzzy inference system with linear output membership functions. Finally, the INSLR system employs an audio subsystem to play back the recognized gestures along with a text output. The system is tested on a data set of 80 words and sentences signed by 10 different signers. The experimental results show that our system achieves a recognition rate of 96%.
Index Terms-Indian sign language, fuzzy inference system, wavelet transform, Canny edge operator, image fusion, elliptical Fourier descriptors, principal component analysis.
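The PCA reduction step described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the input sizes (50 frames, 40 elliptical-Fourier coefficients per frame) and the function name `pca_reduce` are invented for the example.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project per-frame feature vectors onto their top principal components."""
    # Center the data: rows are per-frame feature vectors.
    mean = features.mean(axis=0)
    centered = features - mean
    # Eigen-decomposition of the covariance matrix (symmetric, so use eigh).
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the n_components directions of largest variance.
    order = np.argsort(eigvals)[::-1][:n_components]
    basis = eigvecs[:, order]
    return centered @ basis, basis, mean

# Example: 50 frames, each with 40 Fourier-descriptor coefficients,
# compressed to an 8-dimensional feature vector per frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 40))
reduced, basis, mean = pca_reduce(X, 8)
print(reduced.shape)  # (50, 8)
```

Because the descriptors are computed from contour shape, and PCA is applied to centered data, the pipeline is largely insensitive to translation; the scale and rotation invariance claimed in the abstract comes from normalizing the elliptical Fourier coefficients before this step.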
I. INTRODUCTION
Sign language is the natural language used for communication by hearing-impaired people. A sign language relates the letters, words, and sentences of a spoken language to hand signs and body gestures, enabling hearing-impaired people to communicate among themselves. Sign language recognition systems provide a channel for communication between hearing-impaired people and hearing people. Making such a system fully practical could create jobs for hearing-impaired people in different areas of their interest. Advances in sign language recognition can also greatly promote research in human-computer interfaces. This paper provides a novel technique to recognize signs of Indian sign language using the wavelet transform and a fuzzy inference system. The principal constituents of any sign language recognition system are the hand gestures and shapes normally used by deaf people to communicate among themselves. A gesture is defined as a dynamic movement of the hands that creates signs such as alphabets, numbers, words and sentences. Gestures are classified into two types: static gestures and dynamic gestures. Static gestures refer to a fixed pattern of hand and finger orientation, whereas dynamic gestures involve movement and orientation of the hands together with facial expressions, and are largely used to recognize continuous streams of sentences. Our method of gesture recognition is a vision-based technique which does not use motion sensor gl...
Abstract-This paper summarizes the various algorithms used to design a sign language recognition system. Sign language is the language used by deaf people to communicate among themselves and with hearing people. We designed a real-time sign language recognition system that can recognize gestures of sign language from videos under complex backgrounds. Segmentation and tracking of the non-rigid hands and head of the signer in sign language videos is achieved using active contour models. Active contour energy minimization is driven by the signer's hand and head skin colour, texture, boundary and shape information. Classification of signs is performed by an artificial neural network trained with the error back-propagation algorithm. Each sign in the video is converted into a voice and text command. The system has been implemented successfully for 351 signs of Indian Sign Language under different possible video environments, and recognition rates are calculated for each video environment.
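The classification stage described above can be illustrated with a minimal feed-forward network trained by error back-propagation. This is a hedged sketch, not the paper's network: the feature dimensionality (16), hidden-layer size (24), number of classes (4), data, and hyperparameters are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: 100 feature vectors (e.g. contour descriptors from the
# segmentation stage) and 4 sign classes. All sizes here are assumptions.
X = rng.normal(size=(100, 16))
y = rng.integers(0, 4, size=100)
Y = np.eye(4)[y]                            # one-hot targets

# Two-layer feed-forward network with sigmoid units.
W1 = rng.normal(scale=0.1, size=(16, 24)); b1 = np.zeros(24)
W2 = rng.normal(scale=0.1, size=(24, 4));  b2 = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)                # hidden activations
    return H, sigmoid(H @ W2 + b2)          # output activations

_, O = forward(X)
initial_loss = np.mean((O - Y) ** 2)

lr = 0.5
for _ in range(500):
    H, O = forward(X)
    # Error back-propagation of the squared-error loss (delta rule).
    dO = (O - Y) * O * (1 - O)              # output-layer deltas
    dH = (dO @ W2.T) * H * (1 - H)          # hidden-layer deltas
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

_, O = forward(X)
final_loss = np.mean((O - Y) ** 2)          # should be below initial_loss
```

The predicted class for a video is then simply `O.argmax(axis=1)`, which in the paper is mapped to the corresponding voice and text command.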
One of the main obstacles to the widespread use of the phonocardiogram (PCG) in modern medicine is the various noise components it invariably contains. Although many advances have been made towards automated heart sound segmentation and heart pathology detection and classification, an efficient method for noise handling would be a major aid for further development in this field, especially when working with PCGs collected in realistic environments such as hospitals and clinics. Feature extraction is carried out over 10 decomposition levels on recorded PCG signals using wavelet transformation techniques. The PCG signals are analysed by computing the Energy, Standard Deviation, Variance, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Maximum Entropy (ME) of the heart signal extracted from the phonocardiogram. These calculations are based on the filtration process, with wavelets serving as the filtering technique across the 10 decomposition levels. Different wavelet families, including Haar, Daubechies, orthogonal, Coiflet and biorthogonal wavelets, are compared in the analysis, and computing histograms and denoising the signal using the wavelet menu (wavemenu) tool also form part of the proposed scheme.
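The multi-level wavelet denoising and metric computation described above can be sketched with a hand-rolled Haar transform. This is an illustrative sketch, not the paper's pipeline: the synthetic test signal, the 5-level depth (the paper uses 10), and the threshold value 0.2 are assumptions chosen for the example.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(signal, levels, thresh):
    """Multi-level decomposition, soft-threshold the details, reconstruct."""
    details, a = [], signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

def metrics(ref, est):
    """MSE, RMSE and PSNR of an estimate against a reference signal."""
    mse = np.mean((ref - est) ** 2)
    rmse = np.sqrt(mse)
    psnr = 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)
    return mse, rmse, psnr

# Synthetic stand-in for a PCG recording: a gated 40 Hz tone plus noise.
rng = np.random.default_rng(1)
n = 2 ** 12
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.8)
noisy = clean + 0.3 * rng.normal(size=n)
denoised = denoise(noisy, levels=5, thresh=0.2)
```

Because the heart-sound energy sits in the low-frequency approximation band while broadband noise dominates the detail bands, thresholding the details lowers the MSE (and raises the PSNR) of the denoised signal relative to the noisy one; the other wavelet families listed above would be swapped in by replacing the Haar analysis/synthesis filters.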