We report on our experiences acquiring hybrid Semantic 3D Object Maps of indoor household environments, in particular kitchens, from sensed 3D point cloud data. Our approach comprises a processing pipeline, combining geometric mapping and learning, that handles large input datasets and extracts objects relevant to a personal robotic assistant performing complex manipulation tasks. The objects modeled are those that serve utilitarian functions in the environment, such as kitchen appliances, cupboards, tables, and drawers. The resulting model is accurate enough to be used in physics-based simulations, where the doors of 3D containers can be opened based on their hinge positions. The resulting map is a hybrid representation, comprising both the hierarchically classified objects and triangular meshes used for collision avoidance in manipulation routines.
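To illustrate the hinge-based articulation such a model supports, here is a minimal sketch (using NumPy) that swings a door panel's points about a vertical hinge axis via Rodrigues' rotation formula. The `swing_door` helper and the toy door geometry are illustrative assumptions, not part of the original system:

```python
import numpy as np

def swing_door(points, hinge_point, hinge_axis, angle):
    """Rotate door-surface points about a hinge axis
    (Rodrigues' rotation formula)."""
    axis = np.asarray(hinge_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    p = np.asarray(points, dtype=float) - hinge_point
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotated = (p * cos_a
               + np.cross(axis, p) * sin_a
               + axis * (p @ axis)[:, None] * (1 - cos_a))
    return rotated + hinge_point

# A closed door panel in the x-z plane, hinged along the z axis at x = 0.
door = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.0], [0.6, 0.0, 1.8]])
opened = swing_door(door, hinge_point=np.zeros(3),
                    hinge_axis=[0, 0, 1], angle=np.pi / 2)
# The hinge corner stays fixed; the free edge sweeps 90 degrees into +y.
```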
Abstract: In this paper we present a comprehensive object categorization and classification system, of great importance for mobile manipulation applications in indoor environments. Specifically, we tackle the problem of recognizing everyday objects that are useful to a personal robotic assistant in fulfilling its tasks, using a hierarchical multi-modal 3D-2D processing and classification system. The acquired 3D data is used to estimate geometric labels (plane, cylinder, edge, rim, sphere) at each voxel cell using the Radius-based Surface Descriptor (RSD). We then propose the use of a Global RSD feature (GRSD) to categorize geometrically similar point clusters into one of the object categories. Once a geometric category and a 3D position are obtained for each object cluster, we extract the corresponding region of interest in the camera image and compute a SURF-based feature vector for it, from which we obtain the exact object instance and the orientation around the object's upright axis. The resultant system provides a hierarchical categorization of objects into basic classes from their geometry and identifies objects and their poses from their appearance, with near real-time performance. We validate our approach on an extensive database of objects acquired using real sensing devices, on both unseen views and unseen objects.
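The categorization step can be sketched as follows: per-voxel geometric labels are collapsed into a single normalized histogram describing the whole cluster, which is then matched against category prototypes. The label counting, the toy prototypes, and the nearest-neighbor matching below are simplified assumptions for illustration, not the paper's actual GRSD definition:

```python
import numpy as np

# Label set standing in for the RSD surface classes.
LABELS = ["plane", "cylinder", "edge", "rim", "sphere"]

def global_label_histogram(voxel_labels):
    """Collapse per-voxel geometric labels into one normalized
    histogram describing the whole cluster (a GRSD-like idea)."""
    counts = np.array([voxel_labels.count(l) for l in LABELS], float)
    return counts / counts.sum()

def categorize(hist, prototypes):
    """Assign the cluster to the category whose prototype
    histogram is nearest in Euclidean distance."""
    return min(prototypes, key=lambda c: np.linalg.norm(hist - prototypes[c]))

# Toy prototypes: a box-like cluster is mostly planes and edges,
# a mug-like cluster mostly cylinder and rim.
prototypes = {
    "box": np.array([0.7, 0.0, 0.3, 0.0, 0.0]),
    "mug": np.array([0.1, 0.6, 0.0, 0.3, 0.0]),
}
hist = global_label_histogram(["plane"] * 6 + ["edge"] * 3 + ["cylinder"])
category = categorize(hist, prototypes)  # → "box"
```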
Abstract: In this paper we present a comprehensive perception system with applications to mobile manipulation and grasping for personal robotics. Our approach makes use of dense 3D point cloud data acquired with stereo vision cameras by projecting textured light onto the scene. To create models suitable for grasping, we extract the supporting planes and model object clusters with different geometric surface primitives. The resultant decoupled primitive point clusters are then reconstructed as smooth triangular mesh surfaces, and their use is validated in grasping experiments using OpenRAVE [1]. To annotate the point cloud data with primitive geometric labels we make use of our previously proposed Fast Point Feature Histograms [2] and probabilistic graphical methods (Conditional Random Fields), obtaining a classification accuracy of 98.27% across different object geometries. We demonstrate the validity of our approach by applying the proposed system to the problem of building object models usable in grasping applications with the PR2 robot (see Figure 1).
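The supporting-plane extraction step can be sketched with a minimal RANSAC plane fit on a synthetic tabletop scene. This is a generic stand-in under stated assumptions (NumPy only, hypothetical `ransac_plane` helper), not the authors' exact implementation:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Minimal RANSAC plane fit: repeatedly fit a plane through 3
    random points and keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - a) @ normal)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic tabletop (the z = 0 plane) with an object cluster above it.
gen = np.random.default_rng(0)
table = np.c_[gen.uniform(-1, 1, (300, 2)), np.zeros(300)]
obj = gen.uniform(0.2, 0.4, (40, 3))
cloud = np.vstack([table, obj])
support = ransac_plane(cloud, rng=0)
# `support` marks the 300 table points; the object points remain
# for subsequent clustering and primitive fitting.
```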