If virtual reality systems are to make good on their name, designers must know how people perceive space in natural environments, in photographs, and in cinema. Perceivers understand the layout of a cluttered natural environment through the use of nine or more sources of information, each based on different assumptions: occlusion, height in the visual field, relative size, relative density, aerial perspective, binocular disparities, accommodation, convergence, and motion perspective. The relative utility of these sources at different distances is compared using their ordinal depth-threshold functions. From these, three classes of space around a moving observer are postulated: personal space, action space, and vista space. Within each, a smaller number of sources act in concert, with different relative strengths. Given the general ordinality of the sources, these spaces are likely to be affine in character, stretching and collapsing with viewing conditions. One of these conditions is controlled by lens length in photography and cinematography, or by field-of-view commands in computer graphics. These have striking effects on many of these sources of information and, consequently, on how the layout of a scene is perceived.
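The lens-length effect mentioned at the end can be illustrated with a minimal pinhole-camera sketch (the focal lengths, object sizes, and distances below are illustrative assumptions, not values from the article). Holding the subject's image size constant while switching from a wide to a long lens forces the camera back, which raises the relative image size of background objects and so compresses the relative-size information for depth:

```python
# Pinhole-camera sketch of how lens length reshapes relative-size
# depth information. All numbers are hypothetical examples.

def image_size(focal_m, object_m, distance_m):
    """Projected image size of an object under a pinhole model."""
    return focal_m * object_m / distance_m

PERSON = 1.7   # object height in metres (assumed)
GAP = 2.0      # a same-sized object stands 2 m behind the subject

# Wide lens (35 mm): subject framed from 2 m away.
f_wide, d_wide = 0.035, 2.0
s_subject = image_size(f_wide, PERSON, d_wide)

# Telephoto (200 mm): step back until the subject's image size is unchanged.
f_tele = 0.200
d_tele = f_tele * PERSON / s_subject  # roughly 11.4 m

# Relative image size of background object to subject in each framing.
ratio_wide = image_size(f_wide, PERSON, d_wide + GAP) / s_subject
ratio_tele = (image_size(f_tele, PERSON, d_tele + GAP)
              / image_size(f_tele, PERSON, d_tele))

print(round(ratio_wide, 2))  # 0.5: background looks half the subject's size
print(round(ratio_tele, 2))  # 0.85: long lens collapses the apparent depth
```

The ratio depends only on the two distances, so the longer lens does not change perspective by itself; it changes the camera position needed for the same framing, and with it the relative-size cue, consistent with the stretching and collapsing of perceived space described above.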