Background: A proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and thus are usually not quantitatively justified.
Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. The trees are then combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one muscular and the other kinematic.
Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five categories defined by the overall grasp shape, finger positioning and muscular activation. Part of the results is qualitatively in accordance with previous results describing kinematic hand grasping synergies.
Conclusions: The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
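To make the clustering step concrete, the following is a minimal sketch of how a per-subject hierarchical tree over grasps can be built with agglomerative clustering; the feature vectors, distance metric, linkage method and cut threshold are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: one hierarchical tree over 20 grasps, each summarized by a
# feature vector (e.g., mean EMG envelopes or joint angles). The random
# features stand in for real per-grasp measurements.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
grasp_features = rng.normal(size=(20, 16))   # 20 grasps x 16 hypothetical features

# Pairwise distances between grasps, then an agglomerative tree (dendrogram).
distances = pdist(grasp_features, metric="euclidean")
tree = linkage(distances, method="average")

# Cutting the tree into five branches mirrors the five movement categories
# reported in the abstract (the number of branches is the only link to the paper).
categories = fcluster(tree, t=5, criterion="maxclust")
print(categories)
```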
This paper proposes a fast and robust multi-people tracking algorithm for mobile platforms equipped with an RGB-D sensor. Our approach features an efficient point-cloud, depth-based clustering, a HOG-like classification to robustly initialize person tracking, and a person classifier with online learning to manage person ID matching even after a full occlusion. For people detection, we make the assumption that people move on a ground plane. Tests are presented on a challenging real-world indoor environment and results have been evaluated with the CLEAR MOT metrics. Our algorithm proved able to correctly track 96% of people with very limited ID switches and few false positives, at an average frame rate of 25 fps. Moreover, its applicability to robot-people following tasks has been tested and discussed.
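As a rough illustration of the ground-plane assumption and the depth-based clustering step, the sketch below segments candidate person clusters from a synthetic point cloud; the clustering algorithm (DBSCAN), the ground threshold and all parameters are assumptions, not necessarily those used in the paper.

```python
# Sketch: points close to an assumed ground plane z = 0 are discarded,
# and the remaining points are grouped into candidate person clusters
# that would then be validated by the HOG-like classifier.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(5000, 3) * [6.0, 6.0, 2.0]    # synthetic x, y, z in metres

above_ground = points[points[:, 2] > 0.2]              # drop points within 20 cm of the floor
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(above_ground)

# Each non-negative label is one candidate cluster.
candidates = [above_ground[labels == k] for k in set(labels) if k >= 0]
print(len(candidates), "candidate clusters")
```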
Industry 4.0 aims to make collaborative robotics accessible and effective inside factories. Human–robot interaction is enhanced by means of advanced perception systems, which allow flexible and reliable production. We are among the contenders in a challenge aimed at improving cooperation in industry. Within this competition, we developed a novel visual servoing system, based on a machine learning technique, for the automation of the winding of copper wire during the production of electric motors. Image-based visual servoing systems are often limited by the speed of the image processing module, which runs at a frequency an order of magnitude lower than the robot control loop. In this article, a solution to this problem is proposed: the visual servoing function is synthesized using the Gaussian mixture model (GMM) machine learning system, which guarantees an extremely fast response. Issues related to data size reduction and to the collection of the data set needed to properly train the learner are discussed, and the performance of the proposed method is compared against the standard visual servoing algorithm used to train the GMM. The system has been developed and tested on a path-following application on an aluminium bar that simulates the real stator teeth of a generic electric motor. Experimental results demonstrate that the proposed method is able to reproduce the visual servoing function with minimal error while guaranteeing an extremely high working frequency.
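The following is a minimal sketch of the general idea of approximating a visual servoing law with a GMM: (image feature, velocity command) pairs produced offline by a slower classical controller are modeled jointly, and commands are recovered at runtime by Gaussian mixture regression. The dimensions, the toy stand-in for the classical controller and the regression details are assumptions for illustration only, not the authors' implementation.

```python
# Sketch: fit a joint GMM on (feature, command) data, then regress the
# command from a new feature vector by conditioning the mixture (GMR).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
features = rng.normal(size=(2000, 4))                       # image features (hypothetical)
commands = features @ rng.normal(size=(4, 2)) \
           + 0.01 * rng.normal(size=(2000, 2))              # stand-in for the slow IBVS controller

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([features, commands]))

def gmr_predict(x, gmm, d_in):
    """Condition the joint GMM on the input block to regress the command."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp, pred = [], []
    for k in range(gmm.n_components):
        mu_x, mu_y = means[k, :d_in], means[k, d_in:]
        Sxx, Sxy = covs[k][:d_in, :d_in], covs[k][:d_in, d_in:]
        gain = np.linalg.solve(Sxx, Sxy).T                  # Syx @ inv(Sxx)
        pred.append(mu_y + gain @ (x - mu_x))
        diff = x - mu_x
        resp.append(weights[k] * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff))
                    / np.sqrt(np.linalg.det(Sxx)))
    resp = np.asarray(resp) / np.sum(resp)
    return np.sum(resp[:, None] * np.asarray(pred), axis=0)

print(gmr_predict(features[0], gmm, d_in=4))                # fast approximation of the command
```

At runtime only the mixture parameters are evaluated, which is why such a learned approximation can respond much faster than the image-processing-bound controller it was trained on.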
Lacrimal function was studied in 30 patients treated for glaucoma with 0.25% timolol eye drops. Rose bengal and fluorescein staining disclosed punctate epithelial defects in 11 eyes after one week. During the following weeks these defects disappeared spontaneously in most eyes. Schirmer tests (I and II), tear lysozyme and pre-corneal film break-up time were significantly decreased by the treatment, while tear immunoglobulins were unimpaired. The authors conclude that topical timolol treatment decreases tear production. This effect is quantitatively limited and does not appear dangerous for normal eyes, although it may become so for eyes with an originally low lacrimal secretion.
This paper focuses on the key role played by the adoption of a framework in teaching robotics with a computer science approach in the master's degree in Computer Engineering. The framework adopted is the Robot Operating System (ROS), which is becoming a de facto standard within the robotics community. The educational activities proposed in this paper are based on a constructionist approach. The Mindstorms NXT robot kit is adopted to trigger the learning challenge. The ROS framework is exploited to guide the students' programming methodology during the laboratory activities and to allow students to practice the major computer programming paradigms and the best programming practices. The major robotics topics students are involved with are: acquiring data from sensors, connecting sensors to the robot, and navigating the robot to reach the final goal. The positive effects of this approach are highlighted in this paper by comparing the work recently produced by students with the work produced in previous years, in which ROS was not yet adopted and many different software tools and languages were used. The results of a questionnaire are reported, showing that we achieved the didactic objectives we expected as instructors.
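As a hypothetical example of the kind of laboratory exercise described in this abstract, the sketch below shows a small rospy node that reads a range sensor and stops the robot near an obstacle; the topic names and threshold are assumptions and depend on the specific NXT/ROS driver configuration used in the course.

```python
#!/usr/bin/env python
# Sketch: drive forward until the range sensor reports an obstacle closer
# than 30 cm. Topic names "/ultrasonic_sensor" and "/cmd_vel" are assumed.
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

cmd_pub = None

def on_range(msg):
    # Stop when the measured distance drops below 0.3 m, otherwise creep forward.
    cmd = Twist()
    cmd.linear.x = 0.0 if msg.range < 0.3 else 0.1
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("nxt_obstacle_stop")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/ultrasonic_sensor", Range, on_range)
    rospy.spin()
```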