This textbook for advanced undergraduates and graduate students emphasizes algorithms for a range of strategies for locomotion, sensing, and reasoning. It concentrates on wheeled and legged mobile robots but discusses a variety of other propulsion systems. This edition includes advances in robotics and intelligent machines over the ten years prior to publication, including significant coverage of SLAM (simultaneous localization and mapping) and multi-robot systems. It includes additional mathematical background and an extensive list of sample problems. Various mathematical techniques that were assumed in the first edition are now briefly introduced in appendices at the end of the text to make the book more self-contained. Researchers as well as students in the field of mobile robotics will appreciate this comprehensive treatment of state-of-the-art methods and key technologies.
The direction of 'up' has traditionally been measured by setting a line (luminous if necessary) to the apparent vertical, a direction known as the 'subjective visual vertical' (SVV); however, for optimum performance in visual skills including reading and facial recognition, an object must be seen the 'right way up'--a separate direction which we have called the 'perceptual upright' (PU). In order to measure the PU, we exploited the fact that some symbols rely upon their orientation for recognition. Observers indicated whether the symbol 'horizontal P', presented in various orientations, was identified as the letter 'p' or the letter 'd'. The average of the transitions between 'p-to-d' and 'd-to-p' interpretations was taken as the PU. We have labelled this new experimental technique the Oriented CHAracter Recognition Test (OCHART). The SVV was measured by asking observers whether a line was rotated clockwise or counter-clockwise relative to gravity. We measured the PU and SVV while manipulating the orientation of the visual background in different observer postures: upright, right side down and (for the PU) supine. When the body, gravity and the visual background were aligned, the SVV and the PU were similar, but as the background orientation and observer posture diverged, the two measures varied markedly. The SVV was closely aligned with the direction of gravity, whereas the PU was closely aligned with the body axis. Both probes showed influences of all three cues (body orientation, vision and gravity), and these influences could be predicted from a weighted vectorial sum of the directions indicated by these cues. For the SVV, the weighting ratio was 0.2:0.1:1.0 for the body, visual and gravity cues, respectively. For the PU, the ratio was 2.6:1.2:1.0. In the case of the PU, these same weighting values were also predicted by a measure of the reliability of each cue; however, reliability did not predict the weightings for the SVV.
This is the first time that maximum likelihood estimation has been demonstrated in combining information across different reference frames. The OCHART technique provides a new, simple and readily applicable method for investigating the PU that complements the SVV. Our findings suggest that OCHART is particularly suitable for investigating the functioning of visual and non-visual systems and their contributions to the perceived upright in novel environments, such as high- and low-g environments, and in patient and ageing populations, as well as in normal observers.
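The weighted vectorial sum described above can be sketched in a few lines: each cue (body, vision, gravity) is treated as a unit vector pointing in the direction it indicates, scaled by its weight, and the predicted upright is the direction of the resulting sum. The function and variable names below are illustrative, not from the original study; the weights are the PU and SVV ratios reported in the abstract.

```python
import math

def predicted_upright(body_deg, vision_deg, gravity_deg, weights):
    """Direction (degrees from gravity) of the weighted vector sum of
    three cue directions. Illustrative sketch, not the authors' code."""
    x = y = 0.0
    for angle, w in zip((body_deg, vision_deg, gravity_deg), weights):
        rad = math.radians(angle)
        x += w * math.sin(rad)  # rightward component of the weighted cue
        y += w * math.cos(rad)  # upward (anti-gravity) component
    return math.degrees(math.atan2(x, y))

# Hypothetical condition: observer upright (0 deg), background tilted
# 90 deg, gravity at 0 deg.
pu = predicted_upright(0.0, 90.0, 0.0, (2.6, 1.2, 1.0))   # PU weights
svv = predicted_upright(0.0, 90.0, 0.0, (0.2, 0.1, 1.0))  # SVV weights
```

With these weights the tilted background pulls the predicted PU substantially away from gravity, while the predicted SVV stays close to it, matching the pattern of results reported above.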
Surprisingly little is known of the perceptual consequences of visual or vestibular stimulation in updating our perceived position in space as we move around. We assessed the roles of visual and vestibular cues in determining the perceived distance of passive, linear self motion. Subjects were given cues to constant-acceleration motion: either optic flow presented in a virtual reality display, physical motion in the dark, or combinations of visual and physical motions. Subjects indicated when they perceived they had traversed a distance that had previously been given to them either visually or physically. The perceived distance of motion evoked by optic flow was accurate relative to a previously presented visual target but was perceptually equivalent to about half the physical motion. The perceived distance of physical motion in the dark was accurate relative to a previously presented physical motion but was perceptually equivalent to a much longer visually presented distance. When both visual and physical cues were present, the perceived distance of self motion was more closely equivalent to the physical motion experienced than to the simultaneous visual motion, even when the target was presented visually. We discuss this dominance of the physical cues in determining the perceived distance of self motion in terms of capture by non-visual cues. These findings are related to emerging studies that show the importance of vestibular input to neural mechanisms that process self motion.