The nuclear industry features some of the most extreme environments in the world, with radiation levels and other hazards severely restricting human access to many facilities. One method for minimising human exposure under these conditions is the glovebox: a sealed volume with controlled access ports through which handling operations are performed. While gloveboxes allow operators to carry out complex handling tasks, they expose operators to considerable risk if containment is breached; historically, serious incidents have occurred, including punctured gloves that resulted in lifetime radiation doses. To date, robotic systems have had relatively little impact on the industry, even though they clearly offer major opportunities for improving productivity and significantly reducing risks to human health. This work presents the challenges facing robotic and AI solutions for nuclear gloveboxes, and introduces a step towards bringing cutting-edge technology into them. The problem statement and challenges are highlighted, and an integrated demonstrator is then proposed for robotic handling of nuclear material in gloveboxes. The proposed approach spans tele-manipulation and shared autonomy, computer vision for robotic manipulation, and machine learning for condition monitoring.
Hearing-impaired individuals use sign languages to communicate with others within their community. Sign language is widely used and readily understood by hard-of-hearing individuals, but few hearing people know it. In this paper, a hand gesture recognition system is developed to overcome this barrier, allowing those who do not know sign language to communicate simply with hard-of-hearing individuals. A computer vision-based system is designed to detect sign language: the datasets used consist of binary images, which are fed to a convolutional neural network (CNN) that extracts image features, classifies the images, and thereby recognises the gestures. The gestures used in this paper are from American Sign Language. In the real-time system, the images are converted to binary images using the Hue, Saturation, and Value (HSV) colour model. In this model, 87.5% of the data is used for training and 12.5% for testing, and the accuracy obtained is 97%.
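The HSV-based binarisation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact HSV skin-tone thresholds (`h_range`, `s_min`, `v_min`) are assumptions chosen for demonstration, and a real pipeline would use vectorised operations (e.g. OpenCV's `cv2.inRange`) rather than a per-pixel loop.

```python
import colorsys

import numpy as np


def to_binary_mask(rgb_image, h_range=(0.0, 0.14), s_min=0.15, v_min=0.2):
    """Convert an RGB image to a binary hand mask via HSV thresholding.

    The threshold values are illustrative skin-tone bounds, not the
    paper's parameters. Pixels whose HSV values fall inside the bounds
    are set to 1 (foreground); all others to 0 (background).
    """
    height, width, _ = rgb_image.shape
    mask = np.zeros((height, width), dtype=np.uint8)
    for i in range(height):
        for j in range(width):
            # Normalise RGB to [0, 1] and convert to HSV.
            r, g, b = rgb_image[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            if h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min:
                mask[i, j] = 1
    return mask


if __name__ == "__main__":
    # Tiny synthetic image: one skin-toned pixel, the rest black.
    img = np.zeros((2, 2, 3), dtype=np.uint8)
    img[0, 0] = [200, 150, 120]  # roughly skin-coloured
    print(to_binary_mask(img))
```

The resulting binary mask, resized to a fixed resolution, would then form the input to the CNN classifier.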