Every year, a significant number of people lose a body part in an accident, through illness, or in high-risk manual jobs. Several studies have tried to reduce the constraints and risks in their lives through the use of technology. This work proposes a learning-based approach to gesture recognition using a surface electromyography (sEMG) device, the Myo Armband released by Thalmic Labs, a commercial device with eight non-intrusive, low-cost sensors. Using the Myo Armband, which records data at about 200 Hz, we collected a dataset of six distinct hand gestures from 35 able-bodied subjects. We trained a gated recurrent unit (GRU) network that takes the raw signals from the sEMG sensors as input. The proposed approach obtained 99.90% training accuracy and 99.75% validation accuracy. We also evaluated the proposed system on a test set of new subjects, obtaining an accuracy of 77.85%. In addition, we report the test predictions for each gesture separately and analyze which gestures are difficult for the Myo Armband and the proposed network to distinguish accurately. Moreover, we studied the capability of gated recurrent unit networks in gesture recognition approaches for the first time. Finally, we integrated our method into a system that classifies live hand gestures.
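To make the pipeline concrete, the following is a minimal sketch of a GRU classifier over raw 8-channel sEMG windows such as those recorded by the Myo Armband at 200 Hz. It is not the authors' code: the window length, hidden size, and layer choices are illustrative assumptions.

```python
# Sketch: GRU classifier for raw sEMG windows (8 channels, 6 gesture classes).
# Hyperparameters are assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn

class GRUGestureClassifier(nn.Module):
    def __init__(self, n_channels=8, hidden_size=64, n_classes=6):
        super().__init__()
        # GRU reads the raw sEMG sequence: input shape (batch, time, channels)
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        _, h_n = self.gru(x)       # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])  # logits over the six gestures

# Example: a batch of assumed 2-second windows at 200 Hz (400 time steps).
model = GRUGestureClassifier()
logits = model(torch.randn(32, 400, 8))  # -> (32, 6)
```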
In recent years the advances in Artificial Intelligence (AI) have been seen to play an important role in human well-being, in particular enabling novel forms of human-computer interaction for people with a disability. In this paper, we propose a sEMG-controlled 3D game that leverages a deep learning-based architecture for real-time gesture recognition. The 3D game experience developed in the study is focused on rehabilitation exercises, allowing individuals with certain disabilities to use low-cost sEMG sensors to control the game experience. For this purpose, we acquired a novel dataset of seven gestures using the Myo armband device, which we utilized to train the proposed deep learning model. The signals captured were used as an input of a Conv-GRU architecture to classify the gestures. Further, we ran a live system with the participation of different individuals and analyzed the neural network’s classification for hand gestures. Finally, we also evaluated our system, testing it for 20 rounds with new participants and analyzed its results in a user study.
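A Conv-GRU of this kind typically applies a 1D convolution across time to extract local features from the sEMG channels before a GRU models the temporal dynamics. The sketch below shows one possible layout; kernel sizes, widths, and pooling are assumptions, not the architecture reported in the paper.

```python
# Sketch: Conv-GRU for 8-channel sEMG, 7 gesture classes.
# Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvGRU(nn.Module):
    def __init__(self, n_channels=8, n_classes=7):
        super().__init__()
        # Conv1d expects (batch, channels, time)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, time, channels)
        z = self.conv(x.transpose(1, 2))      # (batch, 32, time/2)
        _, h_n = self.gru(z.transpose(1, 2))  # GRU over pooled feature sequence
        return self.head(h_n[-1])             # logits over the seven gestures

logits = ConvGRU()(torch.randn(16, 400, 8))  # e.g. 2-s windows at 200 Hz
```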
In this work, we present the first technical validation of an assistive robotic platform designed to assist people with neurodevelopmental disorders. The platform, called LOLA2, is equipped with an artificial intelligence-based application that reinforces the learning of daily life activities in people with neurodevelopmental disorders. LOLA2 integrates a ROS-based navigation system and a user interface through which healthcare professionals and their patients interact with it. Technically, we embedded all of these modules, together with an artificial intelligence agent for online action detection (OAD), into an NVIDIA Jetson Xavier board. The OAD approach provides a detailed report on how well users perform a set of daily life activities that they are learning or reinforcing. The entire human–robot interaction process for working with users with neurodevelopmental disorders was designed by a multidisciplinary team. Its main features include joystick control of the robot, a graphical user interface that shows video tutorials of the activities to learn or reinforce, and the ability to monitor users' progress as they complete tasks. The main objective of the assistive robotic platform LOLA2 is to provide a system that allows therapists to track how well users understand and perform daily tasks. This paper focuses on the technical validation of the proposed platform and its application. To that end, we carried out a set of tests with four users with neurodevelopmental disorders and particular physical conditions, under the supervision of the corresponding therapeutic personnel. We present detailed results of all interventions with end users, analyzing the usability, effectiveness, and limitations of the proposed technology. During this initial technical validation with real users, LOLA2 detected the actions of users with disabilities with high precision: it distinguished four assigned daily actions with high accuracy, although some actions were more challenging due to the users' physical limitations. Overall, the presence of the robot in the therapy sessions received excellent feedback from both medical professionals and patients, and this study demonstrates that the developed robot is capable of assisting and monitoring people with neurodevelopmental disorders as they perform their daily living tasks.
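For illustration only, the sketch below shows one way an OAD agent could be wired into a ROS graph on an embedded board such as the Jetson Xavier: subscribe to a camera stream, buffer a short window of frames, and publish the detected action label. The topic names, window size, and classify() call are hypothetical assumptions and are not taken from the paper.

```python
# Hypothetical ROS (rospy) skeleton for an online action detection node.
# Topics, message types, and the classifier stub are assumptions for illustration.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def classify(frames):
    # Placeholder for the OAD model; returns an action label for the window.
    return "unknown_action"

class OADNode:
    def __init__(self, window=16):
        self.window = window
        self.frames = []
        self.pub = rospy.Publisher("/lola2/oad/action", String, queue_size=1)
        rospy.Subscriber("/camera/image_raw", Image, self.on_frame)

    def on_frame(self, msg):
        # Accumulate frames; when the window is full, publish one action label.
        self.frames.append(msg)
        if len(self.frames) >= self.window:
            self.pub.publish(String(data=classify(self.frames)))
            self.frames = []

if __name__ == "__main__":
    rospy.init_node("oad_node")
    OADNode()
    rospy.spin()
```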