Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached for the seen and for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb UltraTM prosthetic hand and a motion controlTM prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to . In addition, we show that with training, subjects’ performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
The loss of hand profoundly affects an individual's quality of life. Prosthetic hands can provide a route to functional rehabilitation by allowing the amputees to undertake their daily activities. However, the performance of current artificial hands falls well short of the dexterity that natural hands offer. The aim of this study is to test whether an intelligent vision system could be used to enhance the grip functionality of prosthetic hands. To this end, a convolutional neural network (CNN) deep learning architecture was implemented to classify the objects in the COIL100 database in four basic grasp groups: tripod, pinch, palmar and palmar with wrist rotation. Our preliminary, yet promising, results suggest that the additional machine vision system can provide prosthetic hands with the ability to detect object and propose the user an appropriate grasp.
Cloud computing faces more security threats, requiring better security measures. This paper examines the various classification and categorization schemes for cloud computing security issues, including the widely known CIA trinity (confidentiality, integrity, and availability), by considering critical aspects of the cloud, such as service models, deployment models, and involved parties. A comprehensive comparison of cloud security classifications constructs an exhaustive taxonomy. ISO27005, NIST SP 800-30, CRAMM, CORAS, OCTAVE Allegro, and COBIT 5 are rigorously compared based on their applicability, adaptability, and suitability within a cloud-based hosting methodology. The findings of this research recommend OCTAVE Allegro as the preferred cloud hosting paradigm. With many security models available in management studies, it is imperative to identify those suitable for the rapidly expanding and dynamically evolving cloud environment. This study underscores the significant methods for securing data on cloud-hosting platforms, thereby contributing to establishing a robust cloud security taxonomy and hosting methodology.
Hunting behavior of the Burrowing-Owl, Athene cunicularia (Molina, 1782) (Aves: Strigiformes) in an urban environment in Belo Horizonte, Minas Gerais, Brazil.The burrowing-owl, Athene cunicularia (Molina, 1782) has a wide distribution, which extends from western North America to southern South America. They are an opportunistic plunderer species preying on insects and small mammals. Data on hunting tactics of this specie are scarce in the literature. Here, the general and hunting behaviors of the species are described in an urban environment, a university campus, in the city of Belo Horizonte, Minas Gerais state, Brazil. An adult male of A. cunicularia had its behavior studied during 14 days, in July 2005, totaling 20 hours of systematic observations using the instantaneous focal-animal method. The observations were done using 10x50 binoculars, between 06:00 and 09:00 and after 16:00, from a minimal distance of 25 m. We identified and described five behavioral categories: self-maintenance, locomotion, hunting, non-agonistic social and alert. No difference was found between behaviors displayed during morning and afternoon periods. Four hunting strategies displayed by A. cunicularia were identified (hunting in soil, air hunting, perch to soil hunting and hover flight hunting) but no difference between their frequencies was found. These behaviors were observed only after 18:00. These non-significant patterns suggest that the oportunistic hunting behavior of the birds of prey ("sit-and-wait") and the active search hunting patterns are alternated according to opportunity, identification and capture of the prey and the place of capture.
Humans excel in grasping and manipulating objects because of their life-long experience and knowledge about the 3D shape and weight distribution of objects. However, the lack of such intuition in robots makes robotic grasping an exceptionally challenging task. There are often several equally viable options of grasping an object. However, this ambiguity is not modeled in conventional systems that estimate a single, optimal grasp position. We propose to tackle this problem by simultaneously estimating multiple grasp poses from a single RGB image of the target object. Further, we reformulate the problem of robotic grasping by replacing conventional grasp rectangles with grasp belief maps, which hold more precise location information than a rectangle and account for the uncertainty inherent to the task. We augment a fully convolutional neural network with a multiple hypothesis prediction model that predicts a set of grasp hypotheses in under 60 ms, which is critical for real-time robotic applications. The grasp detection accuracy reaches over 90% for unseen objects, outperforming the current state of the art on this task.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
hi@scite.ai
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.