Across the world, several million people use sign language as their primary means of communication, and they face daily obstacles with their families, teachers, neighbours and employers. According to recent statistics from the World Health Organization, 360 million people worldwide have disabling hearing loss (5.3% of the world's population), around 13 million of them in the Middle East. The development of automated systems capable of translating sign languages into words and sentences has therefore become a necessity. We propose a model that recognizes both static gestures, such as numbers and letters, and dynamic gestures, which involve movement and motion in performing the signs. Additionally, we propose a segmentation method that splits a sequence of continuous signs in real time by tracking palm velocity; this makes it possible to translate not only pre-segmented signs but also continuous sentences. We use the Leap Motion controller, an affordable and compact device that accurately detects and tracks the motion and position of the hands and fingers. The proposed model applies several machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW), on two different feature sets. This research increases the opportunity for Arabic hearing-impaired and deaf persons to communicate easily through Arabic Sign Language recognition (ArSLR). The proposed model serves as an interface between hearing-impaired persons and hearing persons who are unfamiliar with Arabic sign language, bridging the gap between them and promoting social inclusion. The proposed model is applied to Arabic signs comprising 38 static gestures (28 letters and the numbers 1-10) plus 16 static words, and 20 dynamic gestures. A feature-selection process is carried out, yielding two different feature sets.
For static gestures, the KNN model outperforms the other models on both the palm feature set and the bone feature set, with accuracies of 99% and 98% respectively. For dynamic gestures, the DTW model outperforms the other models on both the palm feature set and the bone feature set, with accuracies of 97.4% and 96.4% respectively.
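The palm-velocity segmentation mentioned in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, threshold value, and data layout are assumptions. The idea is simply that palm speed drops near zero during the pauses between consecutive signs, so those low-speed frames can be treated as segment boundaries.

```python
import math

def segment_signs(positions, timestamps, velocity_threshold=50.0):
    """Split a continuous stream of palm positions into sign segments.

    positions:  list of (x, y, z) palm coordinates in mm (hypothetical layout).
    timestamps: matching list of frame times in seconds.
    A segment boundary is declared whenever palm speed (mm/s) drops below
    velocity_threshold, approximating the pause between two signs.
    """
    segments, current = [], []
    for i in range(1, len(positions)):
        dt = timestamps[i] - timestamps[i - 1]
        if dt <= 0:
            continue  # skip duplicate or out-of-order frames
        deltas = [a - b for a, b in zip(positions[i], positions[i - 1])]
        speed = math.sqrt(sum(d * d for d in deltas)) / dt
        if speed < velocity_threshold:
            # Palm is nearly still: close the current segment, if any.
            if current:
                segments.append(current)
                current = []
        else:
            current.append(positions[i])
    if current:
        segments.append(current)
    return segments
```

For example, a stream that moves quickly, pauses, then moves again would be split into two segments, each of which could then be passed to a per-sign classifier.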
Human-Computer Interaction (HCI) refers to the interaction between computers and humans, and sign language recognition is one of its most important applications. Several research efforts have aimed to interpret and translate sign language into a spoken language to help hearing-impaired persons integrate into their communities. Sign language is the main means of communication for deaf and hearing-impaired persons, enabling them to communicate with their societies and with each other. According to the World Health Organization, 466 million people have hearing loss (about 5% of the world's population); 432 million (93%) of them are adults and 34 million (7%) are children. Hearing-impaired persons generally have the same level of mental capability as hearing persons. The main problem is that most hearing persons cannot understand sign language, and many hearing-impaired persons cannot read or write the spoken language; this creates a barrier between deaf persons and their societies, so developing an automatic sign language recognition system is essential. This research introduces a dynamic Arabic Sign Language recognition system using the Microsoft Kinect. Recognition relies on two machine learning algorithms, (a) Decision Tree and (b) Bayesian Network, after which an Ada-Boosting technique is applied to enhance recognition. We compared the results with two direct matching techniques, (a) Dynamic Time Warping and (b) Hidden Markov Model. The system was applied to 42 Arabic gestures related to the medical field. The experimental results show that the recognition rate reached 91.18% for the Decision Tree classifier, 92.50% for the Bayesian classifier, and 93.7% after applying Ada-Boosting.
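Both abstracts use Dynamic Time Warping as a direct matching technique for dynamic gestures. As a point of reference, here is a minimal sketch of the standard DTW distance between two 1-D feature sequences; the papers' actual feature vectors and distance functions are not specified here, so this is illustrative only.

```python
def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two 1-D sequences.

    Fills an (n+1) x (m+1) cost table where cell (i, j) holds the minimal
    cumulative cost of aligning the first i elements of seq_a with the
    first j elements of seq_b, allowing stretches and compressions in time.
    """
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])  # local distance
            # Extend the cheapest of the three admissible alignments.
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

In a template-matching recognizer of this kind, an observed gesture trajectory is compared against one stored template per sign, and the label of the template with the smallest DTW distance is returned.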