Do locomotor aftereffects depend specifically on visual feedback? In 7 experiments, 116 college students were tested with eyes closed on stationary running or on walking to a previewed target, after adapting, also with eyes closed, to treadmill locomotion. Subjects showed faster inadvertent drift during stationary running and increased distance (overshoot) when walking to a target. Overshoot appeared to saturate (i.e., reach a ceiling) at 17% after as little as 1 min of adaptation. Sidestepping at test reduced overshoot, suggesting motor specificity. However, inadvertent drift effects were reduced if the eyes were open and the treadmill was drawn through the environment during adaptation, indicating that these effects involve self-motion perception. Differences in the expression of inadvertent drift and of overshoot after adaptation to treadmill locomotion may have been due to the different sets of ancillary cues available for the 2 tasks. Self-motion perception is multimodal.
VR lends itself to the study of intersensory calibration in self-motion perception. However, proper calibration of visual and locomotor self-motion in VR is complicated by the compression of perceived distance and by unfamiliar modes of locomotion. Although adaptation is fairly rapid with exposure to novel sensorimotor correlations, here it is shown that good initial calibration is found when both (1) the virtual environment is richly structured in near space and (2) locomotion is on solid ground. It had previously been observed that correct visual speeds seem too slow when walking on a treadmill. Several principles may be involved, including inhibitory sensory prediction, distance compression, and missing peripheral flow in the reduced field of view (FOV). However, although a richly structured near-space environment provides higher rates of peripheral flow, its presence does not improve calibration when walking on a treadmill. Conversely, walking on solid ground still shows relatively poor calibration in an empty (though well-textured) virtual hallway. Because walking on solid ground incorporates well-calibrated mechanisms that can assess the speed of self-motion independent of vision, these observations suggest that near space may have been better calibrated in the head-mounted display (HMD). Near-space obstacle-avoidance systems may also be involved. Order effects in the data from the treadmill experiment indicate that recalibration of self-motion perception occurred during the experiment.
Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., man with violin); in a memory task, recognition was tested after each sequence. Target detection was much better than recognition memory, but in both tasks, the more pictures on the frame, the lower the performance. When the presentation duration was set at 160 ms with a variable interframe interval such that the total times were the same as in the initial experiments, the results were similar. The results suggest that visual processing occurs in 2 stages: fast, global processing of all pictures in Stage 1 (usually sufficient for detection) and slower, serial processing in Stage 2 (usually necessary for subsequent memory).

Keywords: picture perception; picture memory; target detection; RSVP; search

As people look around their normal environment, they take in the scene in a series of fixations lasting about 250 ms. Just how much information can be extracted from each fixation, and how well can it be remembered later? Recent studies have suggested not only that a scene can be understood within such a glimpse, but also that a target can be detected among as many as four simultaneous scenes presented briefly, at little or no additional cost (Rousselet, Thorpe, & Fabre-Thorpe, 2004b). In the present study we investigate this claim using two tasks, detection and later memory. The ability to detect a target almost as well among several items as when only one item is presented suggests some capacity for processing multiple items in parallel.
Indeed, studies of the monkey visual system using single-cell recordings show that cortical neurons that are selective for particular objects can "recognize" multiple objects in parallel at levels as high as the inferior temporal cortex. When the scene is cluttered, this initial parallel process is followed within 150 ms by competitive inhibition of all but the one relevant object in a given receptive field (e.g., Chelazzi, Duncan, Miller, & Desimone, 1998; see Rousselet, Thorpe, & Fabre-Thorpe, 2004a, for a review). The large and overlapping receptive fields found in the inferior temporal cortex would allow for detection of a target among several nontargets in parallel, followed by competitive suppression of nontargets. If a similar processing sequence occurs in human vision, that could account for our capacity to detect a target among multiple pictures rapidly with little interference from nontarget pictures. The subsequent zeroing in on a single item for continued processing is consistent with evidence for serial processing of individual items when the task requires it. As Rousselet et al. (2004a) said, "Constraints considerably limit the amount of information that can be processed and explicitly accessed at once, so that serial selection of objects is often necessary...
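The timing manipulation described in the RSVP abstract above (a fixed 160 ms picture exposure padded by a variable blank interframe interval so that frame-to-frame totals match the 240 to 720 ms/frame of the initial experiments) can be sketched with a small arithmetic helper. This is a minimal illustration, not code from the original study; the function and constant names are invented here.

```python
# Sketch of the RSVP timing manipulation: fixed exposure + variable ISI
# (interstimulus interval) so total per-frame time matches the original design.
FIXED_EXPOSURE_MS = 160  # fixed picture duration in the follow-up experiment

def rsvp_schedule(total_per_frame_ms, n_frames):
    """Return a list of (exposure_ms, isi_ms) pairs, one per frame,
    where exposure + ISI equals the per-frame time being matched."""
    isi_ms = total_per_frame_ms - FIXED_EXPOSURE_MS
    if isi_ms < 0:
        raise ValueError("per-frame total is shorter than the fixed exposure")
    return [(FIXED_EXPOSURE_MS, isi_ms) for _ in range(n_frames)]

# Matching a 240 ms/frame condition over 6 frames:
schedule = rsvp_schedule(240, 6)
# each frame: 160 ms picture + 80 ms blank = 240 ms frame-to-frame
```

Under this scheme the pictures themselves are always visible for the same 160 ms; only the blank gap varies, which is what allows the comparison between exposure duration and total processing time per frame.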