Facial paralysis is the inability to move the muscles of the face on one or both sides; it may affect the patient's ability to speak, blink, swallow saliva, eat, or communicate through natural facial expressions, and it can also harm the patient's well-being. Computer-based systems for detecting facial paralysis are important for developing standardized tools for medical assessment, treatment, and monitoring; additionally, they are expected to provide user-friendly tools for patient monitoring at home. In this work, a methodology to detect facial paralysis in a face photograph is proposed. The system consists of three modules: facial landmark extraction, facial measure computation, and facial paralysis classification. The facial measures quantify the level of asymmetry between facial elements using the extracted landmarks, and a binary classifier based on a multi-layer perceptron produces the output label. The Weka suite was used to design the classifier and implement the learning algorithm. Tests on publicly available databases show strong classification results and indicate that the methodology used to design the binary classifier generalizes well to other databases, even when participants do not perform similar facial expressions.
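As a concrete illustration of the three-module pipeline, a minimal sketch might look as follows. The abstract does not specify the landmark model, the asymmetry measures, or the MLP configuration, so the 68-point dlib-style indices, the mirrored-distance measure, and scikit-learn's MLPClassifier (standing in for the Weka MLP) below are all assumptions, not the authors' implementation.

```python
"""Hedged sketch: landmarks -> asymmetry measures -> binary MLP classifier."""
import numpy as np
from sklearn.neural_network import MLPClassifier

def asymmetry_measures(landmarks: np.ndarray) -> np.ndarray:
    """Compare left/right facial elements mirrored across the face midline.

    `landmarks` is an (68, 2) array; the index pairs below (eye corners,
    mouth corners, brow ends) are illustrative, not the paper's measures.
    """
    midline_x = landmarks[:, 0].mean()
    pairs = [(36, 45), (39, 42), (48, 54), (17, 26)]
    measures = []
    for left, right in pairs:
        l, r = landmarks[left].copy(), landmarks[right].copy()
        r[0] = 2 * midline_x - r[0]             # mirror right point to the left
        measures.append(np.linalg.norm(l - r))  # residual asymmetry distance
    return np.asarray(measures)

# Toy demo with synthetic landmark sets; 0 = healthy, 1 = facial paralysis.
rng = np.random.default_rng(0)
landmark_sets = rng.uniform(0, 200, size=(100, 68, 2))
labels = rng.integers(0, 2, size=100)
X = np.stack([asymmetry_measures(lm) for lm in landmark_sets])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
print(clf.predict(X[:5]))
```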
Falls on stairs are a common cause of accidental injury among older adults. Understanding the mechanisms behind such accidents may improve not only fall prevention but also support for independent living among the elderly. Thus, a method to automatically detect falls and other abnormal events on stairs is presented and empirically validated. Automatic fall detection will also assist in collecting data for environmental design improvements and fall prevention. Real-time 3D joint-tracking information, provided by a Microsoft Kinect, is used to estimate walking speed and to extract a set of features that encode human motion during stairway descent. Supervised learning algorithms, trained on manually labelled data simulated in a home laboratory, achieved a detection accuracy of approximately 92% under leave-one-subject-out cross-validation. In contrast with previous research, which identified visual tracking of the feet as the best indicator of dangerous activity, the 3D motion of the hips is shown experimentally to be the most informative component for detecting abnormal events in the 3D tracking data provided by the Kinect.
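A minimal sketch of this kind of pipeline is given below, assuming Kinect skeleton frames arrive as (T, J, 3) arrays of joint positions at roughly 30 fps. The hip-joint index, the window-level features, and the random-forest classifier are assumptions for illustration; the abstract names neither the feature set nor the specific learners.

```python
"""Hedged sketch: hip-motion features per window + leave-one-subject-out CV."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

FPS = 30
HIP = 0  # assumed index of the hip-center joint in the skeleton array

def window_features(joints: np.ndarray) -> np.ndarray:
    """Summarize 3D hip motion over one sliding window of skeleton frames."""
    hip = joints[:, HIP, :]            # (T, 3) hip trajectory
    vel = np.diff(hip, axis=0) * FPS   # per-frame 3D velocity
    speed = np.linalg.norm(vel, axis=1)
    return np.array([
        speed.mean(),          # walking-speed estimate
        speed.max(),           # sudden accelerations show up here
        vel[:, 1].min(),       # peak downward (vertical) velocity
        np.ptp(hip[:, 1]),     # vertical hip displacement range
    ])

# Toy demo: 60 synthetic windows from 6 subjects, normal vs. abnormal labels.
rng = np.random.default_rng(1)
X = np.stack([window_features(rng.normal(size=(45, 20, 3))) for _ in range(60)])
y = rng.integers(0, 2, size=60)
subjects = np.repeat(np.arange(6), 10)
scores = cross_val_score(RandomForestClassifier(), X, y,
                         cv=LeaveOneGroupOut(), groups=subjects)
print(scores.mean())
```

Grouping the cross-validation folds by subject, as above, is what makes the evaluation "leave-one-subject-out": no subject's windows appear in both training and test data.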
The inability to move the facial muscles is known as facial palsy, and it affects various abilities of the patient, such as performing facial expressions. Recently, automatic approaches to diagnosing facial palsy from images using machine learning algorithms have emerged, focusing on providing an objective evaluation of paralysis severity. This research proposes an approach that frames lesion-severity assessment as a classification problem with three levels: healthy, slight palsy, and strong palsy. The method exploits regional information, meaning that only certain areas of the face are of interest. Multi-class classification experiments are performed with four different classifiers to validate a set of proposed hand-crafted features. Experiments on available image databases yield strong results (up to 95.61% correct detection of palsy patients and 95.58% correct assessment of the severity level). These results suggest that facial paralysis can be analyzed even under partial occlusion, provided that face detection succeeds and facial features are obtained adequately. They also show that the methodology transfers to other databases while attaining high performance, even when image conditions differ and participants do not perform equivalent facial expressions.
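The sketch below illustrates what "regional information" and the four-classifier comparison could look like in practice. The region definitions (brows, eyes, mouth under a 68-point landmark convention), the droop-asymmetry feature, and the particular four scikit-learn classifiers are assumptions; the abstract does not name the paper's regions, features, or learners.

```python
"""Hedged sketch: per-region asymmetry features + 3-class severity labels."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

CLASSES = {0: "healthy", 1: "slight palsy", 2: "strong palsy"}

def regional_features(landmarks: np.ndarray) -> np.ndarray:
    """One asymmetry feature per facial region (illustrative index groups)."""
    regions = [(range(17, 22), range(22, 27)),   # left/right brow
               (range(36, 42), range(42, 48)),   # left/right eye
               ([48, 49, 50], [52, 53, 54])]     # left/right mouth corner area
    feats = []
    for left, right in regions:
        l = landmarks[list(left)].mean(axis=0)
        r = landmarks[list(right)].mean(axis=0)
        feats.append(abs(l[1] - r[1]))           # vertical droop asymmetry
    return np.asarray(feats)

# Toy demo comparing four classifiers on synthetic data.
rng = np.random.default_rng(2)
X = np.stack([regional_features(rng.uniform(0, 200, (68, 2))) for _ in range(90)])
y = rng.integers(0, 3, size=90)
for clf in (SVC(), KNeighborsClassifier(), RandomForestClassifier(),
            MLPClassifier(max_iter=2000)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=3).mean())
```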
Humans express their emotions verbally and through actions, so emotions play a fundamental role in facial expressions and body gestures. Facial expression recognition is a popular topic in security, healthcare, entertainment, advertising, education, and robotics. Recognizing facial expressions through gesture recognition is a complex and challenging problem, especially in people with facial impairments such as facial paralysis. Facial palsy, or paralysis, refers to the inability to move the facial muscles on one or both sides of the face. This work proposes a methodology based on neural networks and hand-crafted features to recognize six gestures in patients with facial palsy. The proposed facial palsy gesture recognition system is designed and evaluated on a publicly available database, with promising results as a first attempt at this task in the medical field. We conclude that recognizing facial gestures in patients with facial paralysis requires taking the severity of the damage into account, because paralyzed regions behave differently from healthy ones, and any recognition system must be able to discern these behaviors.
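One way to act on that conclusion is to feed the severity level into the network alongside the geometric features, as in the hedged sketch below. The six gesture names, the specific distance features, and the severity encoding are all hypothetical; the abstract lists none of them, and this is only an illustration of the severity-aware design idea.

```python
"""Hedged sketch: hand-crafted landmark features + severity -> 6-way MLP."""
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical gesture set; the paper's six gestures are not named here.
GESTURES = ["rest", "smile", "eyebrow raise", "eye closure", "whistle", "frown"]

def gesture_features(landmarks: np.ndarray, severity: int) -> np.ndarray:
    """Concatenate geometric distances with the palsy-severity level so the
    network can separate, e.g., a weak paralyzed smile from a healthy rest."""
    mouth_width = np.linalg.norm(landmarks[48] - landmarks[54])
    eye_opening = np.linalg.norm(landmarks[37] - landmarks[41])
    brow_height = landmarks[27, 1] - landmarks[19, 1]
    return np.array([mouth_width, eye_opening, brow_height, float(severity)])

# Toy demo on synthetic landmarks with severity levels 0-2 and 6 gesture labels.
rng = np.random.default_rng(3)
X = np.stack([gesture_features(rng.uniform(0, 200, (68, 2)), rng.integers(0, 3))
              for _ in range(120)])
y = rng.integers(0, 6, size=120)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000).fit(X, y)
print(GESTURES[clf.predict(X[:1])[0]])
```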