“…When this expectation view is reconciled with the camera perception, the result is a more precise fix on the location of the robot. Examples of such systems are [100] by Matthies and Shafer, where stereo vision was used for error reduction; a system by Christensen et al. [24], where stereo vision was used in conjunction with a CAD model representation of the space; the PSEIKI system described in [1], [69], which used evidential reasoning for image interpretation; the system presented by Tsubouchi and Yuta [147], which used color images and CAD models; the FINALE system of Kosaka and Kak [75] and Kosaka et al. [76], which used a geometric model and prediction of uncertainties in the Hough space, and its extended version [116], [117], which incorporated vision-based obstacle avoidance for stationary objects; the system of Kriegman et al. [79], which used stereo vision for both navigation and map building; the NEURO-NAV system of Meng and Kak [102], [103], which used a topological representation of space and neural networks to extract features and detect landmarks; the FUZZY-NAV system of Pan et al. [121], which extended NEURO-NAV by incorporating fuzzy logic in a high-level rule-based controller governing the navigation behavior of the robot; the system of [165], which used exit signs, air intakes, and loudspeakers on the ceiling as landmarks and recognized them with a template-matching approach; and the system of Horn and Schmidt [53], [54], which described the localization of the mobile robot MACROBE (Mobile and Autonomous Computer-Controlled Robot Experiment) using a 3D laser range camera, among others.…”
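The reconciliation of an expectation view with the camera perception described above can be sketched, in its simplest form, as a Kalman-style fusion of a predicted pose with a vision-derived pose measurement. This is a minimal one-dimensional sketch under a Gaussian-uncertainty assumption, not the method of any particular cited system; the function name and values are hypothetical.

```python
def fuse_pose(pred_mean, pred_var, meas_mean, meas_var):
    """Fuse a predicted pose (the 'expectation') with a camera-derived
    pose measurement. The result's variance is no larger than either
    input's, i.e. the fix on the robot's location becomes more precise."""
    gain = pred_var / (pred_var + meas_var)      # weight given to the measurement
    mean = pred_mean + gain * (meas_mean - pred_mean)
    var = (1.0 - gain) * pred_var
    return mean, var

# Hypothetical example: odometry predicts x = 2.0 m with variance 0.5;
# matching the camera image against the model yields x = 2.4 m, variance 0.1.
mean, var = fuse_pose(2.0, 0.5, 2.4, 0.1)
```

In this sketch the fused estimate lands closer to the more certain source (the measurement) and its variance drops below both inputs, which is the sense in which reconciling expectation and perception sharpens the localization.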