Recognizing human emotion has made an important contribution to computer-vision applications. This work addresses the safety of autistic people during meltdown crises by introducing a new system that warns caregivers through facial-expression detection, taking a precautionary approach to the meltdown crisis. Indications of meltdown are linked to abnormal facial expressions associated with compound emotions. Researchers long believed that Human Facial Expressions (HFE) could express no more than the seven basic emotions; psychologists, however, consider HFE highly complex, capable of indicating two or more emotions simultaneously, known as compound or mixed emotions. Few studies have addressed Compound Emotions (CE), and Compound Emotion Recognition (CER) remains a difficult task. In this paper, we empirically assess a group of deep spatio-temporal geometric features of micro-expressions of autistic children during a meltdown crisis. To achieve this goal, we compare CER performance across diverse collections of micro-expression features to select the features that best differentiate the compound emotions of autistic children in a meltdown crisis from their normal state, and to identify the best-performing classifier. We recorded videos of autistic children in normal and meltdown states using a Kinect camera under real-world conditions. The experimental evaluation shows that deep spatio-temporal geometric features combined with a Recurrent Neural Network (RNN) with three hidden layers and Information Gain feature selection provide the best performance (85.8%). INDEX TERMS Autism, deep spatio-temporal features, meltdown crisis, facial expressions, compound emotions.
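The pipeline named in the abstract (information-gain-style feature selection feeding a classifier with three hidden layers) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: scikit-learn's `mutual_info_classif` stands in for the Information Gain criterion, a feed-forward `MLPClassifier` stands in for the RNN, and all feature counts, sample sizes, and labels are invented.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 clips x 40 geometric micro-expression features;
# label 1 = meltdown state, 0 = normal state (invented data).
X = rng.normal(size=(200, 40))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # signal planted in first 5 features

pipe = make_pipeline(
    # Mutual information as an information-gain-style selection criterion
    SelectKBest(mutual_info_classif, k=10),
    # Three hidden layers, echoing the abstract's 3-hidden-layer network
    # (a plain MLP here; the paper uses an RNN over temporal sequences)
    MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=500, random_state=0),
)
pipe.fit(X[:150], y[:150])
acc = pipe.score(X[150:], y[150:])
print(round(acc, 2))
```

On real data the features would be spatio-temporal geometric descriptors extracted from Kinect recordings rather than random vectors.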
Navigating safely, recognizing encountered obstacles, and moving independently from one location to another in unknown environments are among the challenges facing visually impaired people. By proposing a solution to these challenges, this work is of great importance to visually impaired people. We propose a consistent, reliable, and robust smartphone-based method that classifies obstacles in unknown environments from partial visual information using computer-vision and machine-learning techniques. The proposed method handles high levels of noise and poor resolution in frames captured by a phone camera. In addition, it offers maximum flexibility to users and uses the least expensive equipment possible. Moreover, by leveraging deep-learning techniques, the method enables semantic categorization to classify obstacles and increase awareness of the explored environment. Its efficiency has been experimentally measured in a variety of experiments on different complex scenes, recording a high accuracy of 90.2%. INDEX TERMS Image analysis, image classification, supervised learning, mobile applications.