This paper presents a novel global localization paradigm motivated by the human visual system (HVS). The HVS actively uses object recognition results to determine self-position and viewing direction. The proposed paradigm consists of three parts: panoramic image acquisition, multiple object recognition, and grid-based
localization. Multiple-object recognition results extracted from panoramic
images are used in the localization part. This high-level object information
is useful not only for global localization but also for robot-object interaction.
Metric global localization (position and viewing direction) is
performed using the bearing information of objects recognized in just
one panoramic image. The feasibility of the proposed localization paradigm
is validated experimentally.
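The bearing-based grid localization described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the object map, the observed bearings, and the grid/heading resolutions are all assumed example values. The idea is to search a grid of candidate poses (x, y, theta) and pick the one whose predicted bearings to the known objects best match the bearings observed in a single panoramic image.

```python
import math

# Hypothetical map of recognized objects: name -> (x, y) position in metres.
OBJECTS = {"door": (0.0, 4.0), "sofa": (5.0, 0.0), "tv": (5.0, 4.0)}

def bearing(pose, point):
    """Bearing of `point` relative to the robot's heading at `pose`."""
    x, y, theta = pose
    return math.atan2(point[1] - y, point[0] - x) - theta

def angle_diff(a, b):
    """Smallest signed difference between two angles (wraps to [-pi, pi))."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def grid_localize(objects, observed, step=0.5, extent=6.0, headings=72):
    """Exhaustive search over a pose grid minimizing squared bearing error.

    `observed` maps object names to bearings (radians, relative to the
    robot's unknown heading) measured from one panoramic image.
    """
    best, best_err = None, float("inf")
    cells = int(extent / step) + 1
    for i in range(cells):
        for j in range(cells):
            x, y = i * step, j * step
            for k in range(headings):
                theta = 2 * math.pi * k / headings
                err = sum(
                    angle_diff(bearing((x, y, theta), objects[n]), obs) ** 2
                    for n, obs in observed.items()
                )
                if err < best_err:
                    best, best_err = (x, y, theta), err
    return best

# Simulate observations from an assumed true pose, then recover it.
true_pose = (2.0, 1.0, 0.0)
observed = {name: bearing(true_pose, p) for name, p in OBJECTS.items()}
pose = grid_localize(OBJECTS, observed)
```

Because the bearings come from a full panorama, objects in every direction constrain the pose, which is why a single image suffices for metric localization in this scheme. A real system would refine the coarse grid estimate and weight bearings by recognition confidence.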