“…In addition, a mobile manipulator's base takes up significant space within work cells, with Figure 2 displaying some real-world factory examples of this. Sometimes, a user must teach the robot in a highly awkward body posture due to anthropometric limitations [7]; moreover, the resistance of robots in drag mode makes it difficult for the user to move the robot delicately [7]. These two issues make hand-guiding for accurate assembly difficult and of low quality (e.g., excessive contact force and low accuracy). Robots equipped with joint torque sensors partially address the second issue of resistance, but the UR5e used in our experiment is widely used in industry, and many of its users would still be affected.…”
Section: Problem Statement and Methods Overview
“…Mobile manipulators can only be placed beside production lines rather than installed on them like collaborative robots, and they therefore occupy space previously provided for human workers. Moreover, programming a robot in a constrained space is very difficult [7]. Overall, ease of programming has been identified as an open challenge in robot assembly [2], [8].…”
Section: Reaching Exploration Insertion
“…The user then moved the object from the DVSP to the desired final pose (DFP), and the robot EE followed the trajectory q k from the DVSP to the DFP under the constraints, so the trajectory could be recorded. At the DFP, the camera automatically takes the second reference photo (RF2), and the DFP can be calculated easily at the end of the trajectory by Equation (7):…”
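Equation (7) itself is not reproduced in this excerpt, so the sketch below only illustrates the surrounding idea: once the constrained trajectory q k has been recorded, the DFP can be read off as the end-effector pose at its final waypoint. Everything here (`desired_final_pose`, the `forward_kinematics` callback, and the toy 1-DOF kinematics) is a hypothetical stand-in, not the paper's implementation.

```python
import numpy as np

def desired_final_pose(trajectory, forward_kinematics):
    """Return the EE pose at the last recorded joint configuration."""
    if len(trajectory) == 0:
        raise ValueError("trajectory is empty; nothing was recorded")
    return forward_kinematics(trajectory[-1])

# Toy stand-in for a real FK routine: a 1-DOF "arm" whose EE position
# is (cos q, sin q) on the unit circle.
toy_fk = lambda q: np.array([np.cos(q), np.sin(q)])

traj = [0.0, 0.5, 1.0]  # recorded joint values q_k along the demonstration
dfp = desired_final_pose(traj, toy_fk)
```

In a real system, `forward_kinematics` would be the manipulator's FK and each trajectory element a full joint vector; the structure of the lookup is the same.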
Collaborative robots are expected to work alongside humans and, in some cases, directly replace existing human workers, thus responding effectively to rapid assembly line changes. Current methods for programming contact-rich tasks, especially in heavily constrained spaces, tend to be fairly inefficient, so faster and more intuitive approaches to robot teaching are urgently required. This work focuses on combining visual-servoing-based learning from demonstration (LfD) and force-based learning by exploration (LbE) to enable fast and intuitive programming of contact-rich tasks with minimal user effort. Two learning approaches were developed and integrated into a framework: one relying on human-to-robot motion mapping (the visual servoing approach) and one on force-based reinforcement learning. The developed framework implements a non-contact demonstration teaching method based on the visual servoing approach and optimizes the demonstrated robot target positions according to the detected contact state. The framework was compared with the two most commonly used baseline techniques, pendant-based teaching and hand-guiding teaching. Its efficiency and reliability were validated through comparison experiments involving the teaching and execution of contact-rich tasks. The proposed framework performed best in terms of teaching time, execution success rate, risk of damage, and ease of use.
“…Since robots and robot arms are often kinematically different from their human operators, and since, with few exceptions, there is little to no haptic feedback for operators, several methods for intuitively mapping operator motion and intent onto robotic systems have been proposed. In [1], Lee et al. introduced a method for unimanual and bimanual teleoperation of a robot arm using Oculus Rift IR LED sensors and touch controllers. They showed that their method leads to subjective and objective improvements over traditional kinesthetic teaching methods on moderately challenging tasks. Rakita et al. [2] introduced a trade-off between IK and other goals such as obstacle or singularity avoidance.…”
Section: B. Teleoperation
“…The field of teleoperation of robots and robot arms has seen a lot of activity since both collaborative robot arms and 6 degrees-of-freedom (DOF) input devices have become more affordable and more widely available [1]. It has been shown that it is generally more intuitive, faster and less mentally exhausting for a human operator to operate a robot arm via head and hand tracking devices rather than via a touch interface, mouse or joystick [2].…”
Teleoperation provides a way for human operators to guide robots in situations where full autonomy is challenging or where direct human intervention is required. It can also be an important tool for teaching robots in order to achieve autonomous behaviour later on. The increased availability of collaborative robot arms and Virtual Reality (VR) devices provides ample opportunity for the development of novel teleoperation methods. Since robot arms are often kinematically different from human arms, mapping human motions to a robot in real time is not trivial. Additionally, a human operator might steer the robot arm toward singularities or its workspace limits, which can lead to undesirable behaviour. This is further accentuated in the orchestration of multiple robots. In this paper, we present a VR interface targeted at multi-arm payload manipulation, which can closely match real-time input motion. By allowing the user to manipulate the payload rather than mapping their motions to individual arms, we are able to simultaneously guide multiple collaborative arms. By releasing a single rotational degree of freedom and using a local optimization method, we can improve each arm's manipulability index, which in turn lets us avoid kinematic singularities and workspace limitations. We apply our approach to predefined trajectories as well as real-time teleoperation on different robot arms and compare performance in terms of end-effector position error and relevant joint motion metrics.
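The manipulability index this abstract refers to is commonly Yoshikawa's measure w = sqrt(det(J Jᵀ)), which tends to zero near kinematic singularities. The sketch below is a generic illustration of that measure only; the 2-link planar Jacobian and unit link lengths are assumptions for the example, not taken from the paper.

```python
import numpy as np

def manipulability_index(J):
    """Yoshikawa's manipulability index w = sqrt(det(J @ J.T)).

    J is the m x n end-effector Jacobian; w -> 0 near singularities.
    The max() guards against tiny negative determinants from round-off.
    """
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a 2-link planar arm (det = l1*l2*sin(q2))."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Elbow bent 90 degrees: w = |sin(pi/2)| = 1 (well-conditioned);
# arm fully stretched: w = |sin(0)| = 0 (singular).
w_bent = manipulability_index(planar_2link_jacobian(0.3, np.pi / 2))
w_straight = manipulability_index(planar_2link_jacobian(0.3, 0.0))
```

A local optimizer like the one the abstract describes would nudge joint configurations toward higher w, steering the arms away from singular postures.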
Human–robot interaction (HRI) has grown in prominence in recent years, and multimodal communication and control strategies are needed to ensure a safe, efficient, and intelligent HRI experience. Despite the considerable focus on multimodal HRI, comprehensive surveys that delineate the various modalities and analyze their combinations in depth remain scarce, limiting holistic understanding and future advancements. This article aims to bridge this gap by conducting an in-depth exploration of multimodal HRI, concentrating on four principal modalities: vision, auditory and language, haptics, and physiological sensing. An extensive review covering algorithms, interface devices, and applications forms part of this discussion. The article distinctively combines multimodal HRI with cognitive science, probing the three dimensions of perception, cognition, and action to demystify the algorithms intrinsic to multimodal HRI. Finally, it highlights the empirical challenges and outlines future directions for multimodal HRI in human-centric smart manufacturing.