The loss of a hand can significantly affect one's work and social life. For many patients, an artificial limb can improve mobility and the ability to manage everyday activities, as well as provide the means to remain independent. This paper provides an extensive review of available biosensing methods for implementing the control system of transradial prostheses based on the measured activity of remnant muscles. The covered techniques include electromyography, magnetomyography, electrical impedance tomography, capacitance sensing, near-infrared spectroscopy, sonomyography, optical myography, force myography, phonomyography, myokinetic control, and modern approaches to cineplasty. The paper also covers combinations of these approaches, which in many cases achieve better accuracy while mitigating the weaknesses of the individual methods. The work focuses on the practical applicability of the approaches and analyses the current challenges associated with each technique, along with its relationship to proprioceptive feedback, an important factor for intuitive control of a prosthetic device, especially for high-dexterity prosthetic hands.
In this analysis, we present results of measurements performed to determine the stability of a hand-tracking system and the accuracy of the detected palm and finger positions. The measurements were performed to evaluate the sensor for use in an industrial robot-assisted assembly scenario. Human–robot interaction is a relevant topic in collaborative robotics, where intuitive and straightforward tools for robot navigation and program-flow control are essential for effective use in production scenarios without unnecessary slowdowns caused by the operator. For hand tracking and gesture-based control, the sensor's accuracy must be known, and for gesture recognition with a moving target, the sensor must provide stable tracking results. This paper evaluates the sensor's real-world performance by measuring the localisation deviations of the tracked hand as it moves through the workspace.
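A minimal sketch of the two quantities evaluated above, assuming the tracker reports palm positions as 3-D points; the sample data and reference grid point below are hypothetical, not the study's actual measurements.

```python
import numpy as np

def localisation_deviation(tracked, reference):
    """Per-sample Euclidean deviation between tracked and reference positions."""
    tracked = np.asarray(tracked, dtype=float)      # shape (N, 3), metres
    reference = np.asarray(reference, dtype=float)  # shape (N, 3), metres
    return np.linalg.norm(tracked - reference, axis=1)

def stability(tracked):
    """Jitter of a static target: spread of samples around their mean position."""
    tracked = np.asarray(tracked, dtype=float)
    return np.linalg.norm(tracked - tracked.mean(axis=0), axis=1).std()

# Example: five noisy samples of a palm held at a known grid point (metres).
ref = np.tile([0.30, 0.10, 0.45], (5, 1))
samples = ref + np.random.normal(scale=0.002, size=ref.shape)
print(localisation_deviation(samples, ref).mean(), stability(samples))
```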
In a collaborative scenario, communication between humans and robots is fundamental to achieving efficiency and ergonomics in task execution. A great deal of research has addressed enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. When the production task has a high degree of variability, the robot's movements can be difficult to predict, and the worker may feel anxious when the robot changes its trajectory and approaches, since the worker has no information about the robot's planned movement. Moreover, without such information, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot's intentions to the human worker: haptic feedback devices that notify the worker about the robot's currently planned trajectory and changes in its status. To verify the effectiveness of the developed human–machine interface in a shared collaborative workspace, a user study was designed and conducted with 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results indicated that all participants improved their task completion time by over 45% and were generally more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system, since it improved users' awareness of the robot's motion plan.
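One possible shape of the notification logic described above: pulse the worn device whenever the robot publishes a new motion plan or a status change. The event names and the haptic driver API are assumptions for illustration, not the paper's implementation.

```python
import time

class ConsoleDevice:
    """Stand-in for the real vibration-motor driver (hypothetical)."""
    def vibrate(self, on):
        print("vibrate", "on" if on else "off")

class HapticNotifier:
    def __init__(self, device):
        self.device = device
        self.current_plan_id = None

    def on_robot_event(self, event):
        # Two pulses signal a newly planned trajectory, one pulse a status change.
        if event["type"] == "new_trajectory" and event["plan_id"] != self.current_plan_id:
            self.current_plan_id = event["plan_id"]
            self.pulse(count=2)
        elif event["type"] == "status_change":
            self.pulse(count=1)

    def pulse(self, count, on=0.15, off=0.10):
        for _ in range(count):
            self.device.vibrate(True)
            time.sleep(on)
            self.device.vibrate(False)
            time.sleep(off)

HapticNotifier(ConsoleDevice()).on_robot_event(
    {"type": "new_trajectory", "plan_id": 7})
```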
There are several ubiquitous kinematic structures used in industrial robots, the most prominent being the six-axis angular structure. However, researchers are experimenting with task-based mechanism synthesis, which could achieve higher efficiency with custom-optimised manipulators. Many studies have focused on finding the most efficient optimisation algorithm for task-based robot manipulators; these manipulators, however, are usually assembled from simple modular joints and links, without exploring more elaborate modules. Here, we show that link modules defined by a small number of parameters outperform more complicated ones. We compare four manipulator link types: basic predefined links with fixed dimensions, straight links whose lengths can be optimised, rounded links, and links whose curvature is defined by a Hermite spline. Manipulators are then built from these modules using a genetic algorithm and optimised for three different tasks. The results demonstrate that manipulators built from simple links not only converge faster, as expected given the smaller number of optimised parameters, but also converge to lower cost values.
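A heavily simplified sketch of the module-based optimisation idea, assuming a planar manipulator built from straight links of optimised lengths. The task (reaching a set of goal points via a crude full-extension proxy) and all numeric settings are illustrative, not the paper's actual cost function or encoding.

```python
import math
import random

GOALS = [(0.8, 0.4), (0.2, 0.9), (-0.5, 0.6)]   # hypothetical task points
N_LINKS = 3

def reach_cost(lengths):
    # Proxy cost: mismatch between full-extension radius and each goal distance,
    # plus a small penalty on total link length.
    radius = sum(lengths)
    return sum(abs(math.hypot(x, y) - radius) for x, y in GOALS) + 0.1 * radius

def mutate(ind, sigma=0.05):
    return [max(0.05, l + random.gauss(0, sigma)) for l in ind]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Elitist genetic algorithm over link-length genomes.
population = [[random.uniform(0.1, 0.6) for _ in range(N_LINKS)] for _ in range(40)]
for _ in range(200):
    population.sort(key=reach_cost)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    population = parents + children
print(population[0], reach_cost(population[0]))
```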
In this work, we extend our previously proposed approach of improving mutual perception during human–robot collaboration by communicating the robot's motion intentions and status to a human worker through hand-worn haptic feedback devices. The improvement introduces spatial tactile feedback, which gives the human worker more intuitive information about the robot's currently planned trajectory, given its spatial configuration. The enhanced feedback devices communicate directional information by activating six tactors spatially organised to represent an orthogonal coordinate frame: the vibration activates on the side of the feedback device closest to the robot's future path. To test the effectiveness of the improved human–machine interface, two user studies were prepared and conducted. The first study quantitatively evaluated how easily users could differentiate the activation of individual tactors of the notification devices. The second study assessed the overall usability of the enhanced notification mode for improving human awareness of the robot's planned trajectory. The results of the first experiment identified the tactors whose vibration intensity was most often confused by users. The results of the second experiment showed that the enhanced notification system allowed participants to complete the task faster and, in general, improved user awareness of the robot's movement plan, according to both objective and subjective data. Moreover, the majority of participants (82%) favoured the improved notification system over its previous non-directional version and vision-based inspection.
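A minimal sketch of the directional mapping described above, assuming six tactors aligned with the axes of an orthogonal frame fixed to the wearer's hand; the frame orientation, names, and sample points are assumptions for illustration.

```python
import numpy as np

TACTORS = {                       # unit vectors in the device frame
    "+X": np.array([1, 0, 0]), "-X": np.array([-1, 0, 0]),
    "+Y": np.array([0, 1, 0]), "-Y": np.array([0, -1, 0]),
    "+Z": np.array([0, 0, 1]), "-Z": np.array([0, 0, -1]),
}

def select_tactor(hand_pos, path_point):
    """Pick the tactor on the side closest to the robot's future path."""
    direction = np.asarray(path_point, float) - np.asarray(hand_pos, float)
    direction /= np.linalg.norm(direction)
    # The tactor whose axis has the largest projection onto the direction wins.
    return max(TACTORS, key=lambda t: TACTORS[t] @ direction)

print(select_tactor([0.0, 0.0, 0.0], [0.2, -0.7, 0.1]))  # -> "-Y"
```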
A depth camera outputs an image in which each pixel encodes the distance between the camera plane and the corresponding point in the scene. Low-cost depth cameras are becoming commonplace, and given their applications in machine vision, the right device must be selected carefully for the environment in which it will be used, since the accuracy of these cameras depends on factors such as the distance to the target and the luminosity of the environment. This paper compares three depth cameras currently available on the market: the Intel RealSense D435, which uses stereo vision to compute depth, and the ASUS Xtion and Microsoft Kinect 2, which represent time-of-flight-based depth cameras. The comparison is based on how the cameras perform at different distances from a flat surface, and we also check whether the colour of the surface affects depth image quality.
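One common way to quantify depth accuracy against a flat surface, in the spirit of the comparison above: fit a least-squares plane to the captured point cloud and report the RMS residual. Loading real camera frames is omitted; the synthetic wall below is purely illustrative.

```python
import numpy as np

def plane_rms_error(points):
    """RMS distance of 3-D points from their least-squares plane (via SVD)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return np.sqrt(np.mean((centred @ normal) ** 2))

# Synthetic wall at z = 1.5 m with 2 mm depth noise.
xy = np.random.uniform(-0.5, 0.5, size=(1000, 2))
z = 1.5 + np.random.normal(scale=0.002, size=1000)
print(plane_rms_error(np.column_stack([xy, z])))  # ~0.002 m
```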
This paper presents an approach to compensating for the effect of thermal expansion on the structure of an industrial robot and thus reducing the difference in the robot's repeatability between cold and warm conditions. In contrast to previous research in this area, which deals with absolute accuracy, this article focuses on determining the achievable repeatability. To unify and increase the robot's repeatability, measurements with highly accurate sensors were performed under different conditions on an ABB IRB1200 industrial robot equipped with thermal sensors mounted at predefined positions around the joints. These measurements allowed us to implement a temperature-based prediction model of the end-effector positioning error. Subsequent tests showed that the model used for error compensation is highly effective: with the methodology presented in this article, the impact of drift can be reduced by up to 89.9%. A robot upgraded with the compensation principle described here does not have to be warmed up, as it works with the same low repeatability error over the entire range of achievable temperatures.
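A minimal sketch of a temperature-based drift model in the spirit of the approach above: an ordinary least-squares fit from joint-temperature readings to end-effector positioning error, applied as an offset. The sensor count and all sample values are assumptions, not the paper's measured data or model form.

```python
import numpy as np

def fit_drift_model(temps, drifts):
    """Fit drift ≈ A·temps + b by ordinary least squares."""
    T = np.column_stack([temps, np.ones(len(temps))])   # append bias column
    coeffs, *_ = np.linalg.lstsq(T, drifts, rcond=None)
    return coeffs

def predict_drift(coeffs, temps):
    return np.column_stack([temps, np.ones(len(temps))]) @ coeffs

# Example: 4 temperature sensors, drift measured along one axis (mm).
temps = np.random.uniform(22, 45, size=(50, 4))
true_w = np.array([0.012, -0.004, 0.009, 0.002])        # synthetic ground truth
drift = temps @ true_w + 0.05 + np.random.normal(scale=0.01, size=50)

model = fit_drift_model(temps, drift)
compensated = drift - predict_drift(model, temps)       # apply as an offset
print(compensated.std(), drift.std())                   # residual vs raw drift
```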
This work focuses on improving a camera system for sensing a workspace in which dynamic obstacles need to be detected. The currently available state-of-the-art solution (MoveIt!) processes data in a centralised manner from cameras that have to be registered before the system starts. Our solution enables distributed data processing and dynamic changes in the number of sensors at runtime. The distributed camera data processing is implemented using a dedicated control unit, on which filtering is performed by comparing the real and expected depth images. As part of a performance benchmark, the speed of processing all sensor data into a global voxel map was compared between the centralised system (MoveIt!) and the new distributed system. The distributed system is more flexible in terms of sensitivity to the number of cameras, offers better framerate stability, and allows the number of cameras to be changed on the fly. The effects of voxel grid size and camera resolution were also compared during the benchmark, where the distributed system showed better results. Finally, the overhead of data transmission in the network is discussed; here, the distributed system is considerably more efficient. Overall, the decentralised system proves to be faster by 38.7% with one camera and 71.5% with four cameras.
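A minimal sketch of the per-camera filtering step described above: pixels whose measured depth deviates from the expected (rendered) depth are kept as dynamic-obstacle candidates and quantised into voxels. Rendering the expected image from the robot and workspace model is omitted; the intrinsics and synthetic frames are illustrative assumptions.

```python
import numpy as np

def dynamic_mask(real_depth, expected_depth, tol=0.03):
    """True where the scene is closer than expected by more than `tol` metres."""
    valid = real_depth > 0                       # zero depth = no measurement
    return valid & (expected_depth - real_depth > tol)

def to_voxels(depth, mask, fx, fy, cx, cy, voxel=0.05):
    """Back-project masked pixels (pinhole model) and quantise into a voxel grid."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x, y = (u - cx) * z / fx, (v - cy) * z / fy
    return set(map(tuple, np.floor(np.column_stack([x, y, z]) / voxel).astype(int)))

expected = np.full((480, 640), 2.0)              # empty workspace: 2 m wall
real = expected.copy()
real[200:240, 300:360] = 1.2                     # an obstacle entered the scene
voxels = to_voxels(real, dynamic_mask(real, expected), 600, 600, 320, 240)
print(len(voxels))                               # occupied dynamic voxels
```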