To effectively use a virtual environment (VE) for applications such as training and design evaluation, a good sense of orientation in the VE is needed. “Natural” human geographical orientation, when moving around in the world, relies on visual as well as proprioceptive feedback. However, the navigation metaphors currently used to move around in a VE often lack proprioceptive feedback. To investigate the possible consequences, an experiment was conducted on the relative contributions of visual and proprioceptive feedback to path integration in a VE. Subjects were immersed in a virtual forest and were asked to turn through specific angles under different combinations of visual, vestibular, and kinesthetic feedback (pure visual, visual plus vestibular, visual plus vestibular plus kinesthetic, pure vestibular, and vestibular plus kinesthetic). Furthermore, two visual conditions with different visual flow were tested: normal visual flow and decreased visual flow produced by a 60% zoom. Results show that kinesthetic feedback provides the most reliable and accurate source of information for path integration, indicating the benefits of incorporating this kind of feedback in navigation metaphors. Orientation based on visual flow alone is the least accurate and reliable. In all conditions, subjects overestimated their turning speed and consequently did not turn far enough. Both the absolute error and the variability of path integration increased with the length of the path.
We examine apparent motion carried by textural properties. The texture stimuli consist of a sequence of grating patches of various spatial frequencies and amplitudes. Phases are randomized between frames to ensure that first-order motion mechanisms applied directly to stimulus luminance are not systematically engaged. We use ambiguous apparent motion displays in which a heterogeneous motion path defined by alternating patches of texture s (standard) and texture v (variable) competes with a homogeneous motion path defined solely by patches of texture s. Our results support a one-dimensional (single-channel) model of motion-from-texture in which motion strength is computed from a single spatial transformation of the stimulus: an activity transformation. The value assigned to a point in space-time by this activity transformation is directly proportional to the modulation amplitude of the local texture and inversely proportional to local spatial frequency (within the range of spatial frequencies examined). The activity transformation is modeled as the rectified output of a low-pass spatial filter applied to stimulus contrast. Our data further suggest that the strength of texture-defined motion between a patch of texture s and a patch of texture v is proportional to the product of the activities of s and v. A strongly counterintuitive prediction of this model, borne out in our data, is that motion between patches of different textures can be stronger than motion between patches of similar texture (e.g. motion between patches of a low-contrast, low-frequency texture l and patches of a high-contrast, high-frequency texture h can be stronger than motion between patches of similar texture h).

Keywords: Second-order motion; Motion metamers; Motion energy; Motion correspondence

INTRODUCTION

First-order motion extraction. Drifting spatiotemporal modulations of various sorts of optical stuff (such as luminance, contrast, texture, binocular disparity, etc.) can induce vivid motion percepts; in each case "something" appears to move from one place to another. This introspective description, however, does not necessarily reflect the underlying processes in human visual motion processing. The study of visual motion extraction mechanisms has traditionally focused on rigidly moving objects, projecting drifting modulations of luminance. Several physiologically plausible computational models have been proposed to extract motion information from drifting luminance modulations. Examples are the gradient detector (see Moulden & Begg, 1986) and the Reichardt or correlator detector (see Reichardt, 1961). These detectors are designed to detect drifting luminance modulations (or their linear transformations) and are therefore called first-order motion extraction mechanisms (Cavanagh & Mather, 1989). Psychophysical experiments (e.g. van Santen & Sperling, 1984; Werkhoven, Snippe & Koenderink, 1990b) have shown that motion perception of drifting modulations of luminance is well explained by a first-order computation called motion energy extraction. Indeed, most...
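As a rough illustration of the activity and product rules described in this abstract, the following Python sketch assigns each texture patch an activity proportional to its modulation amplitude and inversely proportional to its spatial frequency, and computes motion strength between two patches as the product of their activities. All names and numerical values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the single-channel "activity" model of
# motion-from-texture: activity ~ amplitude / spatial frequency,
# and motion strength between two patches = product of their activities.

def activity(amplitude, spatial_frequency):
    """Activity of a texture patch (within the tested frequency range)."""
    return amplitude / spatial_frequency

def motion_strength(patch_a, patch_b):
    """Strength of texture-defined motion between two patches."""
    return activity(*patch_a) * activity(*patch_b)

# Illustrative patches as (amplitude, cycles/deg) -- assumed values only.
l = (0.3, 1.0)   # low-contrast, low-frequency texture
h = (0.8, 4.0)   # high-contrast, high-frequency texture

hetero = motion_strength(l, h)   # motion between different textures
homo = motion_strength(h, h)     # motion between similar textures

# The counterintuitive prediction: the heterogeneous pairing can be stronger.
print(f"l-h strength: {hetero:.3f}, h-h strength: {homo:.3f}")
```

With these assumed values the l-h pairing yields a larger product than the h-h pairing, mirroring the prediction that motion between different textures can outcompete motion between similar textures.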
In the present study, we investigated whether the perception of heading during linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments than for unisensory judgments. Nine participants were exposed to visual, inertial, or visual-inertial motion conditions in a moving-base simulator capable of accelerating along a horizontal linear track with variable heading. Visual random-dot motion stimuli were projected on a display with a 40° horizontal × 32° vertical field of view (FoV). All motion profiles consisted of a raised cosine bell in velocity. Stimulus heading was varied between 0 and 20°. After each stimulus, participants indicated whether perceived self-motion was straight ahead or not. We fitted cumulative normal distribution functions to the data as a psychometric model and compared this model to a nested model in which the slope of the multisensory condition was constrained by the MLI hypothesis. Based on likelihood ratio tests, the MLI model had to be rejected. It seems that the imprecise inertial estimate was weighted relatively more than the precise visual estimate, compared with the MLI predictions. Possibly, this can be attributed to the low realism of the visual stimulus. The present results concur with other findings of overweighting of inertial cues in synthetic environments.
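For reference, the MLI (optimal cue combination) hypothesis tested here makes a concrete quantitative prediction: the multisensory variance follows from the unisensory variances, and the psychometric slope scales inversely with the standard deviation. The sketch below illustrates that prediction alongside a cumulative-normal psychometric model; the sigma values are illustrative assumptions, not the study's estimates.

```python
import numpy as np
from scipy.stats import norm

# Assumed unisensory heading-discrimination SDs (deg) -- illustrative only.
sigma_v, sigma_i = 3.0, 8.0

# MLI prediction for the multisensory SD from the unisensory SDs.
sigma_vi = np.sqrt((sigma_v**2 * sigma_i**2) / (sigma_v**2 + sigma_i**2))

# MLI prediction for the relative weight given to the visual cue.
w_visual = sigma_i**2 / (sigma_v**2 + sigma_i**2)

def psychometric(heading, mu, sigma):
    """Cumulative-normal psychometric model: probability of a
    'not straight-ahead' response at a given heading (deg).
    The slope of this function is inversely related to sigma."""
    return norm.cdf(heading, loc=mu, scale=sigma)

print(f"predicted multisensory SD: {sigma_vi:.2f} deg")
print(f"predicted visual weight:   {w_visual:.2f}")
```

Under MLI, the multisensory condition should therefore show a steeper psychometric slope (smaller SD) than either unisensory condition; the study's likelihood ratio tests rejected this constrained model.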
An experiment was conducted to examine how communication patterns and task performance differ as a function of a group's communication environment, and how these processes change over time. In a longitudinal design, three-person groups had to select, and argue for, the correct answer out of a set of three alternatives for ten questions. Compared with face-to-face groups, video-teleconferencing groups took fewer turns, required more time per turn, and interrupted each other less. Listeners appeared to be more polite, waiting for a speaker to finish before making their conversational contribution. Although groups were able to maintain comparable performance scores across communication conditions, initial differences between conditions in communication patterns disappeared over time, indicating that the video-teleconferencing groups adapted to the newness and limitations of their communication environment. Moreover, owing to increased experience with the task and the group, groups in both conditions needed less conversation to complete the task in later rounds. Implications are discussed for practice, training, and possibilities for future research.
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration, which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, on BCI performance (classification accuracy and bitrate), and on participants' task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile, or visual-tactile stimuli. We observe that attending to visual-tactile (compared with either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared with visual stimuli, which is consistent with the nonsignificant negative trend in participants' task performance. We discuss these findings in light of effects on spatial attention at high-level compared with low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
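This abstract quantifies BCI performance as classification accuracy and bitrate. One common way to convert accuracy into a bitrate is Wolpaw's information transfer rate formula; whether this exact measure was used in the study is an assumption, and the numbers below are purely illustrative.

```python
import math

# Hedged sketch: Wolpaw's information transfer rate, a common way to turn
# classification accuracy into bits per selection. Assumed to be comparable
# to the bitrates reported above, not confirmed by the abstract.

def bits_per_selection(p, n_classes):
    """Bits conveyed by one selection with accuracy p over n_classes options."""
    if p >= 1.0:
        return math.log2(n_classes)
    if p <= 0.0:
        return 0.0
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def bitrate(p, n_classes, selections_per_minute):
    """Bitrate in bits per minute."""
    return bits_per_selection(p, n_classes) * selections_per_minute

# Illustrative values only (not from the study):
print(f"{bitrate(0.85, 4, 10.0):.2f} bits/min")
```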
Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.