Abstract. Cameras are often a good choice as the primary outward-looking sensor for mobile robots, and a wide field of view is usually desirable for responsive and accurate navigation, SLAM and relocalisation. While this can potentially be provided by a single omnidirectional camera, it can also be flexibly achieved by multiple cameras with standard optics mounted around the robot. However, such setups are difficult to calibrate. Here we present a general method for fully automatic extrinsic auto-calibration of a fixed multi-camera rig, with no requirement for calibration patterns or other infrastructure, which works even in the case where the cameras have completely non-overlapping views. The robot is placed in a natural environment, makes a set of programmed movements including a full horizontal rotation, and captures a synchronized image sequence from each camera. These sequences are processed individually with a monocular visual SLAM algorithm. The resulting maps are matched and fused robustly based on corresponding invariant features, and then all estimates are optimised in a full joint bundle adjustment, in which the relative poses of the cameras are constrained to be fixed. We present results showing accurate performance of the method for various two- and four-camera configurations.
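The key constraint in the joint bundle adjustment above is that each camera's pose is the shared rig pose composed with a fixed per-camera extrinsic. A minimal sketch of the resulting reprojection residual (the function names, the pinhole model, and the normalised focal length are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def camera_pose(rig_R, rig_t, ext_R, ext_t):
    # Pose of one camera = rig pose composed with its fixed extrinsic,
    # so the optimiser varies one rig pose per frame, not one pose per camera.
    return rig_R @ ext_R, rig_R @ ext_t + rig_t

def reprojection_residual(point_w, rig_R, rig_t, ext_R, ext_t, obs, f=1.0):
    # Transform a world point into the camera frame and project (pinhole,
    # normalised focal length f assumed for illustration).
    R, t = camera_pose(rig_R, rig_t, ext_R, ext_t)
    p_c = R.T @ (point_w - t)       # world -> camera frame
    u = f * p_c[:2] / p_c[2]        # pinhole projection
    return u - obs                  # residual the bundle adjustment minimises
```

Stacking these residuals over all points, frames and cameras, and minimising over rig poses, extrinsics and point positions, gives the constrained joint optimisation the abstract describes.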
This paper describes a robotics architecture, the ViRbot, used to control the operation of service mobile robots. It accomplishes the required commands using AI action planning and reactive behaviors together with a description of the working environment. In the ViRbot architecture the action planner module uses Conceptual Dependency (CD) primitives as the base for representing the problem domain. After a command is spoken to the mobile robot, a CD representation of it is generated; a rule-based system takes this CD representation and, using the state of the environment, generates further subtasks represented by CDs to accomplish the command. By using a good representation of the problem domain through CDs and a rule-based system as an inference engine, the operation of the robot becomes a more tractable problem and easier to implement. The ViRbot system was tested in the Robocup@Home [1] category of the Robocup competition at Bremen, Germany in 2006 and in Atlanta in 2007, where our robot TPR8 obtained third place in this category.
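The expansion of a spoken command into subtask CDs by a rule base can be sketched as follows. This is a toy illustration, not the ViRbot implementation: the command grammar, the primitive names beyond standard CD vocabulary, and the environment dictionary are all assumptions.

```python
def parse_command(command):
    # Toy grammar: "bring <object>" maps to an ATRANS primitive
    # (transfer of possession of the object to the user).
    _, obj = command.split()
    return {"primitive": "ATRANS", "object": obj, "to": "user"}

# Rule base: each rule expands one CD primitive into subtask CDs,
# consulting the environment state (here, known object locations).
RULES = {
    "ATRANS": lambda cd, env: [
        {"primitive": "PTRANS", "actor": "robot",
         "to": env["location"][cd["object"]]},          # go to the object
        {"primitive": "GRASP", "actor": "robot",
         "object": cd["object"]},                       # pick it up
        {"primitive": "PTRANS", "actor": "robot",
         "to": cd["to"]},                               # deliver it
    ],
}

def plan(command, env):
    cd = parse_command(command)
    return RULES[cd["primitive"]](cd, env)
```

For example, `plan("bring coke", {"location": {"coke": "kitchen"}})` yields a navigate-grasp-deliver sequence of subtask CDs.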
Abstract. This paper presents a robust implementation of an object tracker able to tolerate partial occlusions, rotation and scale changes for a variety of different objects. The objects are represented by collections of interest points which are described in a multi-resolution framework, giving a representation of those points at different scales. Inspired by [1], a stack of descriptors is built only the first time that the interest points are detected and extracted from the region of interest. This provides an efficient representation and results in faster tracking, since the stack can be built off-line. An Unscented Kalman Filter (UKF) using a constant velocity model estimates the position and the scale of the object; with the uncertainty in position and scale obtained from the UKF, the search for the object can be constrained to a specific region in both image space and scale. This approach shows an improvement in real-time tracking and in the ability to recover from full occlusions.
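The gating idea above, predict position and scale with a constant-velocity model, then search only within the predicted uncertainty, can be sketched as follows. Since the constant-velocity model is linear, a plain Kalman predict step stands in here for the UKF; the state layout, process noise, and 3-sigma gate are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def predict(x, P, dt=1.0, q=1e-2):
    # State: [u, v, s, du, dv, ds] -- image position, scale, and their
    # velocities, propagated by a constant-velocity model.
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt
    Q = q * np.eye(6)                 # process noise (assumed isotropic)
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def search_region(x, P, n_sigma=3.0):
    # Gate the detector: search only within n-sigma of the predicted
    # position and scale, in both the image and the scale dimension.
    sd = np.sqrt(np.diag(P)[:3])
    return x[:3] - n_sigma * sd, x[:3] + n_sigma * sd
```

Constraining the interest-point search to this region is what yields the real-time speed-up the abstract reports, while the growing covariance during a full occlusion widens the gate and allows recovery.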