Abstract: As robots make their way out of factories into human environments, outer space, and beyond, they require the skill to manipulate their environment in multifarious, unforeseeable circumstances. In this regard, pushing is an essential motion primitive that dramatically extends a robot's manipulation repertoire. In this work, we review the robotic pushing literature. While focusing on work concerned with predicting the motion of pushed objects, we also cover relevant applications of pushing for planning and control.
“…However, despite being performed in simulation, this proof-of-principle approach should scale to real-world scenarios, owing to the three sources of noise added on long trajectories and the reasoning described above. Moreover, further investigation is required to extend this framework to tasks beyond grasping, such as peg-in-hole problems, where the interaction forces arising from physical contact become critical (27).…”
Designing robotic assistance devices for manipulation tasks is challenging. This work aims at improving the accuracy and usability of physical human-robot interaction (pHRI), where a user interacts with a physical robotic device (e.g., a human-operated manipulator or exoskeleton) by transmitting signals which need to be interpreted by the machine. Typically these signals are used as open-loop control, but this approach has several limitations, such as low take-up and a high cognitive burden for the user. In contrast, a control framework is proposed that can respond robustly and efficiently to the intentions of a user by reacting proactively to their commands. The key insight is to include context- and user-awareness in the controller, improving decision making on how to assist the user. Context-awareness is achieved by creating a set of candidate grasp targets and reach-to-grasp trajectories in a cluttered scene. User-awareness is implemented as a time-variant linear-quadratic regulator (TV-LQR) over the generated trajectories to facilitate motion towards the most likely intention of the user. The system also dynamically recovers from incorrect predictions. Experimental results in a virtual environment with two degrees of freedom of control show the capability of this approach to outperform manual control. By robustly predicting the user’s intention, the proposed controller allows the subject to achieve superhuman performance in terms of accuracy and thereby usability.
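The TV-LQR feedback over generated trajectories can be sketched as a backward Riccati recursion over a sequence of (possibly time-varying) linearized dynamics. This is a minimal illustration of the general technique, not the paper's implementation; the cost weights and the double-integrator system used below are assumptions.

```python
import numpy as np

def tv_lqr(A_seq, B_seq, Q, R, Qf):
    """Finite-horizon time-variant LQR.

    Runs the backward Riccati recursion for dynamics
    x_{t+1} = A_t x_t + B_t u_t with stage cost x'Qx + u'Ru
    and terminal cost x'Qf x, returning one gain matrix per step.
    Tracking a reference trajectory then uses u_t = -K_t (x_t - x_ref_t).
    """
    T = len(A_seq)
    P = Qf
    K_seq = [None] * T
    for t in reversed(range(T)):
        A, B = A_seq[t], B_seq[t]
        # K_t = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_t = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        K_seq[t] = K
    return K_seq
```

In the paper's setting, one such gain schedule would be computed per candidate reach-to-grasp trajectory, and the controller blends assistance according to the predicted user intention.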
“…The estimation of dynamic parameters of a manipulated object by an autonomous mobile robot has received some attention in the past (Stüber et al., 2020). Most of the existing approaches either require an extensive training dataset (Fitzpatrick et al., 2003) or use kinematics-based methods for a specific task (Vithani and Gupta, 2002).…”
Section: Object Model Identification in Motion Planning
Assistive robots designed for physical interaction with objects will play an important role in assisting with mobility and fall prevention in healthcare facilities. Autonomous mobile manipulation remains a hurdle to safely deploying robots in real-life applications. In this article, we introduce a mobile manipulation framework based on model predictive control using learned dynamics models of objects. We focus on the specific problem of manipulating legged objects such as those commonly found in healthcare environments and personal dwellings (e.g., walkers, tables, chairs). We describe a probabilistic method for autonomous learning of an approximate dynamics model for these objects. In this method, we learn dynamic parameters from a small dataset consisting of force and motion data from interactions between the robot and the object. Moreover, we account for multiple manipulation strategies by formulating manipulation planning as a mixed-integer convex optimization. The proposed framework considers the hybrid control system composed of (i) choosing which leg to grasp and (ii) controlling the continuous applied forces for manipulation. We formalize our algorithm based on model predictive control to compensate for modeling errors and find an optimal path to manipulate the object from one configuration to another. We present results for several objects with various wheel configurations. Simulation and physical experiments show that the obtained dynamics models are sufficiently accurate for safe and collision-free manipulation. When combined with the proposed manipulation planning algorithm, the robot successfully moves the object to the desired pose while avoiding any collision.
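The two ingredients described above — fitting dynamic parameters from a small force/motion dataset, and treating the grasp-leg choice as a discrete decision alongside a continuous force — can be illustrated with a minimal planar sketch. Here a point-mass model with viscous friction stands in for the learned object model, and a coarse enumeration over legs and force magnitudes stands in for the mixed-integer convex program; all function names and parameters below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fit_dynamics(forces, velocities, accelerations):
    """Least-squares estimate of mass m and viscous friction c,
    assuming the (illustrative) 1-D interaction model F = m*a + c*v."""
    X = np.column_stack([accelerations, velocities])  # regressor [a, v]
    theta, *_ = np.linalg.lstsq(X, forces, rcond=None)
    return theta  # [m, c]

def choose_leg_and_force(push_dirs, p, v, p_goal, m, c, dt=0.1, f_max=20.0):
    """Hybrid decision as brute-force enumeration: for each discrete leg
    (given by its unit push direction) and each force magnitude on a grid,
    roll the point-mass model one step and keep the choice that brings the
    object closest to the goal pose."""
    best_cost, best_leg, best_f = np.inf, None, None
    for i, d in enumerate(push_dirs):
        for f in np.linspace(0.0, f_max, 21):
            v_next = v + dt * (f * d - c * v) / m
            p_next = p + dt * v_next
            cost = np.linalg.norm(p_next - p_goal)
            if cost < best_cost:
                best_cost, best_leg, best_f = cost, i, f
    return best_leg, best_f
```

Repeating the discrete/continuous choice at every control step gives the receding-horizon (MPC) flavor; in the paper this enumeration is replaced by a mixed-integer convex program over a full horizon.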
“…Although this method shows effective generalization to novel objects, it is constrained in terms of complex tasks and the time scales at which these tasks should be executed. In addition, this method cannot be applied to long-term planning and is only effective for short motions [110], [111]. To overcome this limitation, the authors updated their work in another paper by using a learning-based approach of hand-eye coordination for robotic grasping from monocular images [112].…”
Section: B. Suction and Multifunctional Grasping
The motivation behind our work is to review and analyze the most relevant studies on deep reinforcement learning-based object manipulation. Various studies are examined through a survey of the existing literature and an investigation of several aspects: the intended applications, the techniques applied, the challenges faced by researchers, and recommendations for overcoming them. This review covers relevant articles on deep reinforcement learning-based object manipulation and the solutions they propose. Object grasping is a major manipulation challenge; it requires detection systems, methods, and tools that facilitate efficient and fast agent training. Several studies have proposed that object grasping and its subtypes are the main elements in dealing with the environment and the agent. Unlike other review articles, this review provides distinct observations on deep reinforcement learning-based manipulation. The results of this comprehensive review of deep reinforcement learning in the manipulation field may be valuable for researchers and practitioners because they can expedite the establishment of important guidelines.