Abstract: In this paper, we present a new hybrid visual servoing algorithm for a robot arm positioning task. Hybrid methods in visual servoing partially combine 2D and 3D visual information to improve on traditional image-based and position-based visual servoing. Our algorithm outperforms state-of-the-art hybrid methods. The objective function is designed to include the full 2D and 3D information available either from a CAD model or from a partial reconstruction obtained by decomposing the homography matrix between two views. Here, each of the 2D and 3D error functions is used to control all six degrees of freedom. We call this method 5D visual servoing. The positioning task is formulated as a minimization problem; gradient descent, as a first-order approximation, and Gauss-Newton, as a second-order approximation, are considered in this paper. Simulation results show that these two methods provide an efficient solution to the camera-retreat and feature-visibility problems. The camera trajectory in Cartesian space is also shown to be satisfactory.
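The minimization described above can be sketched with a toy residual. A minimal sketch, assuming a hypothetical stacked 2D/3D error vector `e(q)` with an illustrative identity Jacobian (the paper's actual error and Jacobian are not reproduced here): gradient descent applies a first-order step `q - lam * J.T @ e`, while Gauss-Newton applies the second-order step `q - (J.T J)^{-1} J.T e`.

```python
import numpy as np

# Hypothetical combined error: a 2D image-feature error and a 3D pose
# error stacked into one residual vector. The toy residual e(q) = q - target
# and its identity Jacobian are illustrative placeholders only.
def error(q, target):
    return q - target

def jacobian(q):
    return np.eye(len(q))

def gradient_descent_step(q, target, lam=0.5):
    J, e = jacobian(q), error(q, target)
    return q - lam * (J.T @ e)                      # first-order update

def gauss_newton_step(q, target):
    J, e = jacobian(q), error(q, target)
    return q - np.linalg.solve(J.T @ J, J.T @ e)    # second-order update

# Six degrees of freedom driven toward an illustrative target pose.
target = np.array([0.1, -0.2, 0.3, 0.05, 0.0, -0.1])
q = np.zeros(6)
for _ in range(20):
    q = gradient_descent_step(q, target)
```

On this toy residual, Gauss-Newton converges in a single step while gradient descent converges geometrically; the interest of the paper's formulation lies in the choice of the combined error, not in the updates themselves.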
Understanding the positional semantics of the environment plays an important role in manipulating an object in clutter. The interactions with surrounding objects must be considered in order to perform the task without causing objects to fall or get damaged. In this paper, we learn these semantics in terms of support relationships among different objects in a cluttered environment by utilizing various photometric and geometric properties of the scene. To manipulate an object of interest, we use the inferred support relationships to derive a sequence in which its surrounding objects should be removed while causing minimal damage to the environment. We believe this work can push the boundary of robotic applications in grasping, object manipulation, and picking-from-bin towards objects of generic shape and size, and towards scenarios with physical contact and overlap. We have created an RGBD dataset consisting of various objects used in day-to-day life, present in clutter. We explore many different settings involving different kinds of object-object interaction, and successfully learn support relationships and predict support order in these settings.
Robotic manipulation of objects in clutter remains a challenging problem to date. The challenge is posed by the various levels of complexity involved in interactions among objects. Understanding these semantic interactions among different objects is important for manipulation in complex settings. It can play a significant role in extending the scope of manipulation to cluttered environments involving generic objects and both direct and indirect physical contact. In our work, we aim at learning semantic interactions among objects of generic shapes and sizes lying in clutter involving physical contact. We infer three types of support relationships: "support from below", "support from side", and "containment". Subsequently, the learned support relationships are used to derive a sequence, or order, in which the objects surrounding the object of interest should be removed without causing damage to the environment. The generated sequence is called the support order. We also extend the understanding of semantic interactions from a single view to multiple views and predict support order across multiple views. Using multiple views addresses cases that are not handled by a single view, such as scenarios with occlusion or missing support relationships. We have created two RGBD datasets for our experiments on support order prediction in single and multiple views, respectively. The datasets contain RGB images, point clouds, and depth maps of various objects used in day-to-day life, present in clutter with physical contact and overlap. We captured many different cluttered settings involving different kinds of object-object interaction, and successfully learned support relationships and performed support order prediction in these settings.
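Once support relationships are inferred, deriving a removal sequence reduces to ordering the support graph: objects resting on (or contained in) the target must be removed first. A minimal sketch, assuming a hypothetical edge list where `(a, b)` means object `a` supports object `b` (the paper's inference pipeline is not shown; all object names are illustrative):

```python
from collections import defaultdict

def support_order(supports, target):
    """Removal sequence ending with `target`, derived from support edges.

    Objects supported by `target`, directly or transitively, are removed
    before the target itself; depth-first post-order yields that sequence.
    """
    children = defaultdict(list)
    for supporter, supported in supports:
        children[supporter].append(supported)

    order, seen = [], set()

    def visit(obj):
        if obj in seen:
            return
        seen.add(obj)
        for c in children[obj]:
            visit(c)        # clear everything resting on `obj` first
        order.append(obj)

    visit(target)
    return order

# A cup contained in a bowl that sits on a box: remove the cup, then the
# bowl, and only then the box.
seq = support_order([("box", "bowl"), ("bowl", "cup")], "box")
print(seq)  # ['cup', 'bowl', 'box']
```

This treats all three relationship types ("support from below", "support from side", "containment") as inducing the same remove-the-supported-object-first constraint, which is an assumption made for the sketch.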
In this paper, we present a novel boosted robot vision control algorithm. The method uses online boosting to produce a strong vision-based robot controller starting from two weak algorithms: image-based and position-based visual servoing. The notion of weak and strong algorithms has been introduced here in the context of robot vision control. Appropriate error functions are defined for the weak algorithms to evaluate their suitability for the task. The integrated algorithm has superior performance in both the image and Cartesian spaces. Experiments validate this claim.
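The combination of two weak controllers can be sketched as a weighted blend of their velocity commands, with weights driven by each controller's error measure. This is a minimal sketch under assumed names and an assumed exponential weighting rule, not the paper's exact boosting scheme:

```python
import numpy as np

def blended_command(v_ibvs, v_pbvs, err_ibvs, err_pbvs, beta=1.0):
    """Blend two weak 6-DOF velocity commands by their current errors.

    The controller with the smaller error gets the larger weight; the
    exponential rule and the beta gain are illustrative assumptions.
    """
    w = np.exp(-beta * np.array([err_ibvs, err_pbvs]))
    w /= w.sum()                               # normalized online weights
    return w[0] * v_ibvs + w[1] * v_pbvs       # combined velocity command

# Illustrative commands: IBVS currently has the smaller error, so its
# command dominates the blend.
v = blended_command(np.ones(6), np.zeros(6), err_ibvs=0.2, err_pbvs=0.8)
```

A genuine online-boosting formulation would also update the weights from the history of each weak controller's performance; the snapshot above only shows a single blending step.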
Abstract: This paper presents a novel visual servoing controller for a satellite-mounted dual-arm space robot. The controller is designed to complete the task of servoing the robot's end-effectors to the desired pose while regulating the orientation of the base satellite. A task-redundancy approach is utilized to coordinate the servoing process and the attitude of the base satellite. The visual task is defined as the primary task, while regulating the attitude of the base satellite to zero is defined as the secondary task. The secondary task is formulated as an optimization problem in such a way that it does not affect the primary task while simultaneously minimizing its cost function. A set of numerical experiments carried out on a dual-arm space robot shows the efficacy of the proposed control methodology.
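The standard way to make a secondary task invisible to the primary one is to project it into the null space of the primary task's Jacobian. A minimal sketch of that task-priority structure, with illustrative matrix sizes (the paper's specific Jacobians and attitude cost are not reproduced):

```python
import numpy as np

def prioritized_velocity(J1, xdot1, z):
    """Primary task via the pseudoinverse; secondary velocity `z` is
    projected into the primary task's null space so it cannot disturb
    the visual servoing."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1    # null-space projector
    return J1_pinv @ xdot1 + N1 @ z

# Illustrative redundant system: 3 joint rates, 2-DOF primary task, so one
# degree of freedom remains for the attitude-regulation objective.
J1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
qdot = prioritized_velocity(J1, np.array([0.1, -0.2]), np.array([0.0, 0.0, 0.5]))
```

Because the secondary term lies in the null space of `J1`, the resulting joint velocity reproduces the primary task rate exactly, which is the property the abstract relies on.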