Abstract: Teleoperation provides a promising way for human-robot collaboration in unknown or unstructured environments to perform a cooperative task. It enables humans to complete a task at a remote site and combines the human's intelligence with the robot's capabilities in a collaborative task. Therefore, it is necessary to conduct cross-disciplinary research spanning robotics, artificial intelligence, sensors, and mechatronics. This study covers state-of-the-art research in terms of perception, control, and learn…
“…At first glance, the idea of physical human-robot orchestration may sound similar to concepts such as human-robot cooperation and shared control. Scientific interest in those has been growing exponentially over the last 20 years [108], especially with regard to teleoperated systems [109], and they have at times been shown to be a suitable substitute for human-human cooperation [110]. Human-robot cooperation has been investigated for the achievement of both known and unknown goals [111,112], showcasing a more human-centered approach.…”
In teleoperated Robot-Assisted Minimally-Invasive Surgery (RAMIS), a surgeon controls the movements of instruments inside the patient’s body via a pair of robotic joysticks. RAMIS has transformed many surgical disciplines, but its full potential is still to be realized. In this chapter we propose a pathway towards overcoming several bottlenecks that are related to transparency and stability of the teleoperation channels that mediate RAMIS. We describe the traditional system-centered and the more recent human-centered approaches to teleoperation, and the special considerations for RAMIS as an application of teleoperation. However, the human-centered approach is still a one-sided view, focusing on the surgeon while neglecting the learning capabilities of robotic systems. Hence, we consider the more general idea of physical human-robot orchestration with coevolution of the mutual internal representations of the human and the robot, and discuss it in comparison to human-human collaboration over teleoperated channels.
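The transparency and stability bottlenecks mentioned in the abstract can be illustrated with a minimal simulation sketch: a position-position bilateral coupling between a master and a slave device, each modeled as a unit mass joined by PD coupling forces. Everything here (the function name, the gains `kp` and `kd`, the `delay_steps` channel delay, and the one-second operator push) is a hypothetical choice for illustration, not taken from the chapter.

```python
# Minimal position-position bilateral teleoperation sketch.
# Master and slave are unit masses; each feels a PD coupling force toward
# the (possibly delayed) position it receives from the other side.
def simulate(kp=50.0, kd=10.0, delay_steps=0, dt=1e-3, steps=4000):
    xm = vm = 0.0  # master position and velocity
    xs = vs = 0.0  # slave position and velocity
    xm_hist = [0.0] * (delay_steps + 1)  # positions in transit to the slave
    xs_hist = [0.0] * (delay_steps + 1)  # positions in transit to the master
    traj = []
    for t in range(steps):
        f_h = 1.0 if t * dt < 1.0 else 0.0  # operator pushes for 1 s
        xm_seen = xm_hist[0]  # delayed master position seen by the slave
        xs_seen = xs_hist[0]  # delayed slave position seen by the master
        f_m = -kp * (xm - xs_seen) - kd * vm  # coupling force on master
        f_s = kp * (xm_seen - xs) - kd * vs   # coupling force on slave
        vm += (f_h + f_m) * dt
        xm += vm * dt
        vs += f_s * dt
        xs += vs * dt
        xm_hist = xm_hist[1:] + [xm]
        xs_hist = xs_hist[1:] + [xs]
        traj.append((xm, xs))
    return traj
```

With zero delay the slave settles onto the master's position (good transparency). Increasing `kp` together with a nonzero `delay_steps` is a simple way to observe the classic trade-off: pushing for higher transparency makes the delayed loop lose stability.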
“…According to the control mode, teleoperation systems can be divided into three categories: direct control, supervised control, and shared control [104]. In the direct control mode, the slave robot is controlled directly by the human operator, without autonomous abilities.…”
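The three control modes named in the excerpt can be reduced to how the slave's command is formed on each control cycle. The sketch below is a hedged illustration of that distinction only; the names `human_cmd`, `auto_cmd`, and the blending weight `alpha` are illustrative, not from the cited work, and real shared control typically adapts the blend rather than fixing it.

```python
# Illustrative slave-command selection for the three teleoperation modes.
def slave_command(mode, human_cmd, auto_cmd, alpha=0.5):
    if mode == "direct":
        # Operator drives the slave directly; no autonomy.
        return human_cmd
    if mode == "supervised":
        # Operator supervises high-level goals; autonomy executes.
        return auto_cmd
    if mode == "shared":
        # Blend operator input with the autonomous controller's output.
        return [alpha * h + (1 - alpha) * u
                for h, u in zip(human_cmd, auto_cmd)]
    raise ValueError(f"unknown mode: {mode}")
```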
Manipulation skill learning and generalisation have gained increasing attention due to the wide applications of robot manipulators and the rapid development of robot learning techniques. In particular, the learning from demonstration method has been exploited widely and successfully in the robotics community, and it is regarded as a promising direction for realising manipulation skill learning and generalisation. In addition to the learning techniques, immersive teleoperation enables a human to operate a remote robot through an intuitive interface and achieve telepresence. Thus, combining learning methods with teleoperation, and adapting the learned skills to different tasks in new situations, is a promising way to transfer manipulation skills from humans to robots. This review, therefore, aims to provide an overview of immersive teleoperation for skill learning and generalisation to deal with complex manipulation tasks. To this end, the key technologies, for example, manipulation skill learning, multimodal interfacing for teleoperation and telerobotic control, are introduced. Then, an overview is given of the most important applications of immersive teleoperation platforms for robot skill learning. Finally, this survey discusses the remaining open challenges and promising research topics. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
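One simple form of the skill generalisation the abstract describes is re-targeting a demonstrated trajectory to a new start and goal. The sketch below shows only that minimal affine step; the function name and data are hypothetical, and practical learning-from-demonstration systems use richer models (e.g. dynamic movement primitives or GMM/GMR) rather than this linear rescaling.

```python
# Affinely re-target a demonstrated 1-D trajectory to new endpoints,
# preserving the shape of the demonstrated motion.
def retarget(demo, new_start, new_goal):
    d0, dg = demo[0], demo[-1]
    if dg == d0:
        raise ValueError("demonstration must move between distinct endpoints")
    scale = (new_goal - new_start) / (dg - d0)
    return [new_start + (p - d0) * scale for p in demo]
```

For example, a normalized demonstration `[0.0, 0.2, 0.7, 1.0]` re-targeted to start 2.0 and goal 4.0 keeps its shape while hitting the new endpoints.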
“…The objective of most current work focuses on how to generate good manipulation actions to grasp target objects from clutter. Recently, complex robotic tasks in unknown or unstructured environments have tended to combine perception, control, and cognition [20,21]. The objective of the proposed MQA task further involves a cognitive purpose, which requires the robot to generate a sequence of manipulation actions to explore the environment and answer people's questions.…”
Section: B. Robotic Manipulation in Clutter
In this paper, we propose a novel task, Manipulation Question Answering (MQA), where the robot performs manipulation actions to change the environment in order to answer a given question. To solve this problem, a framework consisting of a QA module and a manipulation module is proposed. For the QA module, we adopt the method for the Visual Question Answering (VQA) task. For the manipulation module, a Deep Q Network (DQN) model is designed to generate manipulation actions for the robot to interact with the environment. We consider the situation where the robot continuously manipulates objects inside a bin until the answer to the question is found. In addition, a novel dataset that contains a variety of object models, scenarios, and corresponding question-answer pairs is established in a simulation environment. Extensive experiments have been conducted to validate the effectiveness of the proposed framework.
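The manipulation module's action loop can be sketched with tabular Q-learning, the update rule that a DQN approximates with a neural network. Everything here is a hypothetical stand-in: the toy chain environment below is not the paper's bin scenario, and the paper learns Q-values with a deep network rather than a table.

```python
import random

# Tabular Q-learning: epsilon-greedy action selection plus the standard
# one-step temporal-difference update Q(s,a) += lr * (target - Q(s,a)).
def q_learning(n_states, n_actions, step_fn, episodes=200,
               lr=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)  # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])  # exploit
            s2, r, done = step_fn(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += lr * (target - Q[s][a])
            s = s2
    return Q

# Toy chain environment: action 1 moves toward the "answer found" state 3
# (reward 1.0, episode ends); action 0 wastes a step (small penalty).
def step_fn(s, a):
    s2 = min(s + a, 3)
    return s2, (1.0 if s2 == 3 else -0.01), s2 == 3
```

After training, the greedy policy in every non-terminal state prefers the action that advances toward the goal, which is the behavior the DQN-based manipulation module learns at scale.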