We describe a process in which the segmentation of objects, as well as the extraction of object shape, is realized through active exploration by a robot vision system. In the exploration process, two behavioral modules that link robot actions to the visual and haptic perception of objects interact. First, an object-independent grasping mechanism is used to gain physical control over potential objects. Once the initial grasp has been evaluated as successful, a second behavior extracts the object shape by means of prediction based on the motion induced by the robot. This also leads to the concept of an "object" as a set of features that change predictably over different frames. The system is equipped with a certain degree of generic prior knowledge about the world: a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment, and knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer than those of many other approaches.
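The notion of an "object" as a set of features that change predictably can be sketched as follows: given matched 3D features in two frames and the rigid body transform induced by the robot's own arm motion (known from proprioception), features whose motion is well predicted by that transform are labeled as belonging to the grasped object. This is a minimal illustration, not the paper's implementation; the function name, threshold, and toy data are assumptions.

```python
import numpy as np

def segment_by_rigid_motion(points_t0, points_t1, R, t, tol=0.01):
    """Label features as 'object' if their motion is predicted by the
    rigid body transform (R, t) induced by the robot's arm movement.

    points_t0, points_t1: (N, 3) arrays of matched 3D feature positions.
    R: (3, 3) rotation matrix; t: (3,) translation vector.
    """
    predicted = points_t0 @ R.T + t              # where each feature should move
    error = np.linalg.norm(points_t1 - predicted, axis=1)
    return error < tol                           # boolean object mask

# Toy example: two features follow the arm's motion, one background
# feature does not.
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])
p0 = np.array([[0.1, 0.2, 0.3], [0.4, 0.1, 0.2], [1.0, 1.0, 1.0]])
p1 = p0 + t
p1[2] = [1.0, 1.0, 1.0]                          # background stays put
mask = segment_by_rigid_motion(p0, p1, R, t)
print(mask)                                      # [ True  True False]
```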
Skeletal trees are commonly used to express geometric properties of a shape. Accordingly, tree edit distance is used to compute the dissimilarity between two given shapes. We present a new tree-edit-based shape matching method which uses a recent coarse skeleton representation. The coarse skeleton representation allows us to represent both shapes and shape categories in the form of depth-1 trees. Consequently, we can easily integrate the influence of the categories into shape dissimilarity measurements. The new dissimilarity measure gives better within-group versus between-group separation, and it mimics the asymmetric nature of human similarity judgements.
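For depth-1 trees, tree edit distance collapses to matching the leaves (e.g. skeleton branches) of one root against the other. A minimal sketch under assumed conventions: leaves carry scalar weights, relabelling a matched leaf costs the weight difference, and an unmatched leaf costs its full weight; matching greedily by sorted weight stands in for the optimal assignment, and the function name and weights are illustrative only.

```python
def depth1_edit_distance(leaves_a, leaves_b):
    """Edit distance between two depth-1 trees given as lists of leaf
    weights (e.g. skeleton branch lengths). Relabelling costs the weight
    difference; deletion/insertion costs the full leaf weight."""
    a = sorted(leaves_a, reverse=True)
    b = sorted(leaves_b, reverse=True)
    n = max(len(a), len(b))
    a += [0.0] * (n - len(a))   # pad: missing leaf = insert/delete
    b += [0.0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

# A 4-branch shape vs. a near-identical shape, then vs. a 3-branch shape
print(depth1_edit_distance([3.0, 2.0, 2.0, 1.0], [3.0, 2.0, 1.5, 1.0]))  # 0.5
print(depth1_edit_distance([3.0, 2.0, 2.0, 1.0], [3.0, 2.0, 1.0]))       # 2.0
```

An asymmetric measure of the kind the abstract mentions could then be obtained by normalizing this cost by the total weight of one tree rather than both.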
Abstract: This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store the whole knowledge that an agent has about the grasping of an object, in order to facilitate reasoning on grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation), within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, leading to grasp hypothesis densities. Grasp densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are used to learn grasp empirical densities, i.e. grasps that have been confirmed through experience. We show the result of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned from physical experience by a robot.
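The hypothesis-to-empirical learning loop can be illustrated with a kernel density over grasp poses that is sampled for execution and refined by confirmed successes. For brevity this sketch models only the 3D position component with isotropic Gaussian kernels; the paper's densities cover the full 6D pose (position and orientation), and the class name, bandwidth, and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class GraspDensity:
    """Kernel density over gripper poses (position-only sketch)."""

    def __init__(self, grasps, bandwidth=0.02):
        self.grasps = np.asarray(grasps)   # (N, 3) observed grasp positions
        self.h = bandwidth

    def sample(self):
        """Draw a grasp to execute: pick a kernel, perturb by bandwidth."""
        k = rng.integers(len(self.grasps))
        return self.grasps[k] + rng.normal(0.0, self.h, size=3)

    def update(self, grasp, success):
        """Confirmed outcomes refine the hypothesis density toward an
        empirical density (here: simply add the successful grasp)."""
        if success:
            self.grasps = np.vstack([self.grasps, grasp])

# Two grasp hypotheses (e.g. from imitation); execute one, confirm it.
hypotheses = GraspDensity([[0.0, 0.0, 0.10], [0.05, 0.0, 0.12]])
g = hypotheses.sample()
hypotheses.update(g, success=True)
print(len(hypotheses.grasps))   # 3
```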
Abstract: We describe a bootstrapping cognitive robot system that, based mainly on pure exploration, acquires rich object representations and associated object-specific grasp affordances. Such bootstrapping becomes possible by combining innate competences and behaviours through which the system gradually enriches its internal representations, and thereby develops an increasingly mature interpretation of the world and ability to act within it. We compare the system's prior competences and developmental progress with human innate competences and the developmental stages of infants.
Index Terms: robots with development and learning skills, active exploration of environment, hardware platform for development, using robots to study development and learning.
This letter introduces a new approach for the automated detection of circular oil tanks from single panchromatic satellite images. The new approach exploits the symmetric nature of circular oil depots, computing radial symmetry in a unique way. We propose an automated thresholding method to focus on circular regions and a new measure, the circle support ratio, to verify detected circles. Experiments are performed on GeoEye-1 test scenes, and the results reveal that the new approach is capable of detecting oil tanks with a high success rate. Our approach is also compared with leading techniques from the literature, providing comparable or superior results.
Index Terms: circle detection, oil tanks, panchromatic satellite imagery, radial symmetry.
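One plausible reading of a circle support ratio is the fraction of a candidate circle's perimeter that lands on edge pixels. The sketch below implements that reading; the paper's exact definition may differ in details such as tolerance bands, and the function name and synthetic edge map are assumptions.

```python
import numpy as np

def circle_support_ratio(edge_map, cx, cy, r, n_samples=360):
    """Fraction of perimeter samples of the candidate circle (cx, cy, r)
    that fall on edge pixels of the binary edge map."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.round(cx + r * np.cos(angles)).astype(int)
    ys = np.round(cy + r * np.sin(angles)).astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    hits = edge_map[ys[inside], xs[inside]] > 0
    return hits.sum() / n_samples

# Synthetic edge map containing one rasterized circle of radius 10.
edges = np.zeros((64, 64), dtype=np.uint8)
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
edges[np.round(32 + 10 * np.sin(theta)).astype(int),
      np.round(32 + 10 * np.cos(theta)).astype(int)] = 1

good = circle_support_ratio(edges, 32, 32, 10)   # true circle: high support
bad = circle_support_ratio(edges, 10, 10, 5)     # spurious candidate: low support
print(good > 0.9, bad < 0.1)
```

A detection would then be verified by thresholding this ratio, rejecting candidates whose perimeter is poorly supported by image edges.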