In human-robot collaboration, the robot's behavior impacts the worker's safety, comfort, and acceptance of the robotic system. In this paper, we address the problem of improving the worker's posture during human-robot collaboration. Using postural assessment techniques and a personalized human kinematic model, we optimize the model's body posture to fulfill a task while avoiding uncomfortable or unsafe postures. We then derive a robot behavior that leads the worker toward that improved posture. We validate our approach in an experiment involving a joint task with 39 human subjects and a Baxter torso-humanoid robot.
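The core idea, optimizing a kinematic model's posture under a task constraint, can be sketched in a few lines. Below is a minimal illustration, not the authors' code: it assumes a toy 3-DoF planar arm, a hypothetical ergonomic cost (squared deviation from a neutral posture, a stand-in for a RULA-style score), and a fixed task-imposed hand position.

```python
# Minimal sketch of constrained posture optimization (illustrative only).
# Assumptions: 3-DoF planar arm, quadratic discomfort cost, fixed hand target.
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.3, 0.25, 0.2])       # link lengths (m)
NEUTRAL = np.array([0.0, 0.5, 0.0])      # "comfortable" joint angles (rad)
TARGET = np.array([0.5, 0.3])            # task-imposed hand position (m)

def hand_position(q):
    """Forward kinematics of the planar chain."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def ergonomic_cost(q):
    """Hypothetical discomfort score: squared deviation from neutral posture."""
    return np.sum((q - NEUTRAL) ** 2)

# Task constraint: the hand must stay on the target point.
reach_constraint = {"type": "eq",
                    "fun": lambda q: hand_position(q) - TARGET}

result = minimize(ergonomic_cost, x0=np.zeros(3),
                  constraints=[reach_constraint], method="SLSQP")
print("optimized posture (rad):", result.x)
```

The optimized posture would then serve as the reference toward which the robot's behavior guides the worker.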
Manipulators based on soft robotic technologies exhibit compliance and dexterity, which ensure safe human-robot interaction. This article is a novel attempt at exploiting these desirable properties to develop a manipulator for an assistive application, in particular a shower arm to assist the elderly in the bathing task. The overall vision for the soft manipulator is to concatenate three modules in a serial manner such that (i) the proximal segment uses cable-based actuation to compensate for gravitational effects and (ii) the central and distal segments use hybrid actuation to autonomously reach delicate body parts and perform the main tasks related to bathing. The role of the latter modules is crucial to the application of the system in the bathing task; however, developing a robust and controllable hybrid-actuated system with advanced manipulation capabilities is a nontrivial challenge, and it is hence the focus of this article. We first introduce our design and experimentally characterize its functionalities, which include elongation, shortening, and omnidirectional bending. Next, we propose a control concept that solves the inverse kinetics problem using multi-agent reinforcement learning to exploit these functionalities despite high dimensionality and redundancy. We demonstrate the effectiveness of the design and control of this module through an open-loop task-space experiment in which it successfully moves through an asymmetric 3-D trajectory sampled at 12 points with an average reaching accuracy of 0.79 ± 0.18 cm. Our quantitative experimental results present a promising step toward the development of the full soft manipulator, eventually contributing to the advancement of soft robotics.
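To make the multi-agent RL idea concrete, here is a minimal sketch, not the article's implementation: each actuator is treated as an independent Q-learning agent choosing discrete actuation changes, all agents share a reward equal to the negative tip-position error, and a toy forward model stands in for the real soft-body kinetics.

```python
# Minimal sketch of multi-agent RL for inverse kinetics (illustrative only).
# Assumptions: 2 actuators, discretized actuation levels, toy forward model.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_LEVELS = 2, 11               # actuators, discrete actuation levels
ACTIONS = np.array([-1, 0, +1])          # decrease / hold / increase one level
TARGET = np.array([0.6, 0.4])

def tip_position(levels):
    """Toy forward model: tip coordinates follow normalized actuation levels."""
    return levels / (N_LEVELS - 1)

# One Q-table per agent: its own actuation level x its own action.
Q = [np.zeros((N_LEVELS, len(ACTIONS))) for _ in range(N_AGENTS)]
alpha, gamma, eps = 0.2, 0.9, 0.1

for episode in range(2000):
    levels = rng.integers(0, N_LEVELS, size=N_AGENTS)
    for step in range(30):
        acts = [int(rng.integers(len(ACTIONS))) if rng.random() < eps
                else int(np.argmax(Q[i][levels[i]])) for i in range(N_AGENTS)]
        new = np.clip(levels + ACTIONS[acts], 0, N_LEVELS - 1)
        # Shared reward couples the otherwise independent agents.
        r = -np.linalg.norm(tip_position(new) - TARGET)
        for i in range(N_AGENTS):
            best_next = np.max(Q[i][new[i]])
            Q[i][levels[i], acts[i]] += alpha * (r + gamma * best_next
                                                 - Q[i][levels[i], acts[i]])
        levels = new

print("reached tip:", tip_position(levels), "target:", TARGET)
```

Factoring the action space per actuator is what keeps learning tractable despite the high dimensionality and redundancy the abstract mentions.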
This paper presents a novel approach to robot instruction for assembly tasks. We argue that robot programming can be made more efficient, precise, and intuitive by leveraging the advantages of complementary approaches such as learning from demonstration, learning from feedback, and knowledge transfer. Starting from low-level demonstrations of assembly tasks, the system extracts a high-level relational plan of the task. A graphical user interface (GUI) then allows the user to iteratively correct the acquired knowledge by refining both the high-level plan and the low-level geometrical knowledge of the task. This combination leads to a programming phase that is faster and more precise than demonstrations alone, and more intuitive than a GUI alone. A final process reuses high-level task knowledge for similar tasks in a transfer-learning fashion. Finally, we present a user study illustrating the advantages of this approach.
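One simple way to picture the demonstration-to-plan step is as a diff over symbolic world states. The sketch below is illustrative only; the predicate names and the add/delete diff rule are assumptions, not the paper's extraction method.

```python
# Minimal sketch of lifting a demonstration into a relational plan
# (illustrative only). Each keyframe is a set of symbolic facts; a plan
# step is the set difference between consecutive keyframes.

def plan_from_demo(keyframes):
    """Return add/delete fact sets between consecutive keyframes."""
    steps = []
    for before, after in zip(keyframes, keyframes[1:]):
        steps.append({"add": after - before, "delete": before - after})
    return steps

demo = [
    {("clear", "peg"), ("on_table", "gear"), ("free", "gripper")},
    {("holding", "gear")},
    {("on", "gear", "peg"), ("free", "gripper")},
]
for i, step in enumerate(plan_from_demo(demo)):
    print(f"step {i}: +{step['add']}  -{step['delete']}")
```

A plan represented this way is exactly what a GUI can expose for step-level correction, and, being relational, it can be re-bound to new objects for transfer.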
In human-robot collaboration, multi-agent domains, or single-robot manipulation with multiple end-effectors, the activities of the involved parties are naturally concurrent. Such domains are also naturally relational, as they involve objects and multiple agents, and models should generalize over both. We propose a novel formalization of relational concurrent activity processes that allows us to transfer methods from standard relational MDPs, such as Monte-Carlo planning and learning from demonstration, to concurrent cooperation domains. We formally compare the formulation to previous propositional models of concurrent decision making, and we demonstrate planning and learning-from-demonstration methods on a real-world human-robot assembly task.
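As a rough intuition for planning over concurrent activities, a joint action assigns one activity to each agent simultaneously, and Monte-Carlo evaluation scores the joint choices. The sketch below is an assumption-laden toy, not the paper's formalization: the agents, activities, and reward are invented for illustration.

```python
# Minimal sketch of Monte-Carlo scoring of concurrent joint activities
# (illustrative only; domain and reward are hypothetical).
import itertools, random

AGENTS = ["human", "robot"]
ACTIVITIES = {"human": ["hold", "screw", "idle"],
              "robot": ["hold", "fetch", "idle"]}

def simulate(joint_action, rng):
    """Toy reward: complementary concurrent work pays off, duplication does not."""
    h, r = joint_action
    if h == "screw" and r == "hold":
        return 1.0
    if h == r and h != "idle":
        return -0.5
    return rng.random() * 0.1           # small noisy baseline

rng = random.Random(0)
joint_actions = list(itertools.product(*(ACTIVITIES[a] for a in AGENTS)))
scores = {ja: sum(simulate(ja, rng) for _ in range(100)) / 100
          for ja in joint_actions}
best = max(scores, key=scores.get)
print("best joint activity:", dict(zip(AGENTS, best)))
```

The relational version replaces these ground activity names with predicates over objects and agents, which is what lets the learned models generalize.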
Learning object models in the wild from natural human interactions is an essential ability for robots performing general tasks. In this paper, we present a robocentric multimodal dataset addressing this key challenge. Our dataset focuses on interactions in which the user teaches new objects to the robot in various ways. It contains synchronized recordings of visual (three cameras) and audio data, which provide a challenging evaluation framework for different tasks. Additionally, we present an end-to-end system that learns object models from object patches extracted from the recorded natural interactions. Our proposed pipeline follows three steps: (a) recognizing the interaction type, (b) detecting the object the interaction focuses on, and (c) learning the models from the extracted data. Our main contribution lies in the steps toward identifying the target object patches in the images. We demonstrate the advantages of combining language and visual features for interaction recognition, and we use multiple views to improve the object modelling. Our experimental results show that our dataset is challenging due to occlusions and domain change with respect to typical object-learning frameworks: the performance of common out-of-the-box classifiers trained on our data is low, and we demonstrate that our algorithm outperforms such baselines.
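Step (a), combining language and visual features for interaction recognition, commonly amounts to early fusion of the two modalities. Here is a minimal sketch under stated assumptions: the feature extractors, dimensions, and classifier are placeholders, not the dataset's reference pipeline.

```python
# Minimal sketch of language + vision fusion for interaction recognition
# (illustrative only; features and classifier choice are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, D_LANG, D_VIS = 200, 32, 64          # samples, feature dimensions

# Stand-ins for real features (e.g., utterance embeddings, image descriptors).
lang_feats = rng.normal(size=(N, D_LANG))
vis_feats = rng.normal(size=(N, D_VIS))
labels = rng.integers(0, 3, size=N)     # hypothetical interaction types

# Early fusion: concatenate modalities into one feature vector per sample.
fused = np.hstack([lang_feats, vis_feats])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```

With real features, the fused classifier is what lets an ambiguous gesture be disambiguated by the accompanying utterance, and vice versa.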