In this work, we propose to reconstruct a complete three-dimensional (3-D) model of an unknown object by fusing visual and tactile information while the object is grasped. Assuming the object is symmetric, a first hypothesis of its complete 3-D shape is generated from a single view. This initial model is used to plan a grasp on the object, which is then executed with a robotic manipulator equipped with tactile sensors. Given the detected contacts between the fingers and the object, the full object model, including the symmetry parameters, can be refined. This refined model then allows the planning of more complex manipulation tasks. The main contribution of this work is an optimal estimation approach for the fusion of visual and tactile data under the constraint of object symmetry. The fusion is formulated as a state estimation problem and solved with an iterated extended Kalman filter. The approach is validated experimentally using both artificial and real data from two different robotic platforms.
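The measurement update of an iterated extended Kalman filter can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state, the range-style measurement model, and all names are hypothetical; the iteration relinearizes the measurement function at the current estimate rather than at the prior, which is what distinguishes the iterated from the standard EKF update.

```python
import numpy as np

def iekf_update(x0, P, z, h, H_jac, R, n_iter=5):
    """Iterated-EKF measurement update (Gauss-Newton style).

    x0 : prior mean (n,),  P : prior covariance (n, n)
    z  : measurement (m,), R : measurement noise covariance (m, m)
    h  : nonlinear measurement function, H_jac : its Jacobian
    """
    x = x0.copy()
    for _ in range(n_iter):
        H = H_jac(x)                        # relinearize at the current iterate
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        # IEKF innovation: accounts for the shifted linearization point
        x = x0 + K @ (z - h(x) - H @ (x0 - x))
    P_new = (np.eye(len(x0)) - K @ H_jac(x)) @ P
    return x, P_new

# Toy example: refine a 2-D point estimate from one range measurement,
# loosely analogous to refining a shape model from a tactile contact.
h = lambda x: np.array([np.linalg.norm(x)])
H_jac = lambda x: (x / np.linalg.norm(x)).reshape(1, -1)
x_post, P_post = iekf_update(
    x0=np.array([1.1, 0.9]), P=0.5 * np.eye(2),
    z=np.array([np.sqrt(2.0)]), h=h, H_jac=H_jac, R=np.array([[0.01]]))
```

With an accurate measurement (small R) relative to the prior (large P), the posterior range `np.linalg.norm(x_post)` moves close to the measured value.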
Several novel and particularly successful methods for object and object-category detection and recognition, based on image features (local descriptions of object appearance), have recently been proposed. These methods rely on the localization of image features and a spatial constellation search over the localized features. Their accuracy and reliability depend on the success of both tasks: image feature localization and spatial constellation model search. In this paper, we present an improved algorithm for image feature localization. The method is based on complex-valued multiresolution Gabor features and their ranking using multiple hypothesis testing. The algorithm provides very accurate local image features across arbitrary scales and rotations. We discuss in detail issues such as the selection of filter parameters, the confidence measure, and the magnitude versus complex representation, and show on a large test sample how these influence performance. The versatility and accuracy of the method are demonstrated on two profoundly different challenging problems (faces and license plates).
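A complex-valued multiresolution Gabor feature of the kind described can be sketched as below. This is an illustrative construction only, under assumed parameter names and a common normalized Gabor form; the paper's exact filter parameterization, frequencies, and ranking procedure are not reproduced here.

```python
import numpy as np

def gabor_kernel(ksize, f, theta, gamma=1.0, eta=1.0):
    """Complex 2-D Gabor kernel with center frequency f and orientation theta
    (normalized form commonly used in multiresolution filter banks)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(f ** 2) * (xr ** 2 / gamma ** 2 + yr ** 2 / eta ** 2))
    carrier = np.exp(2j * np.pi * f * xr)          # complex exponential
    return (f ** 2 / (np.pi * gamma * eta)) * envelope * carrier

def gabor_response(image, row, col, kernels):
    """Complex feature vector at (row, col): inner product of each bank
    kernel with the image patch centered there."""
    half = kernels[0].shape[0] // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    return np.array([np.sum(patch * np.conj(k)) for k in kernels])

# Bank over 3 frequencies (scales) and 4 orientations -> a 12-dimensional
# complex descriptor at one image location (parameters are illustrative).
bank = [gabor_kernel(15, f, k * np.pi / 4)
        for f in (0.1, 0.2, 0.3) for k in range(4)]
img = np.random.default_rng(0).random((64, 64))
feat = gabor_response(img, 32, 32, bank)
```

Keeping the responses complex (rather than taking magnitudes) preserves phase, which is one of the representation choices the abstract says is evaluated.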