An image presented on an autostereoscopic system should not contain discontinuities between adjacent views: a viewer moving from one view to the next should experience a continuous scene. If corresponding points in two perspectives do not spatially abut, the viewer perceives jumps in the scene, an artifact known as interperspective aliasing. Interperspective aliasing arises when object features far from the stereoscopic screen are too small relative to the parallax between adjacent views, which produces visual artifacts. By modeling a 3D point as a defocused image point, we can apply Fourier analysis to derive a depth-dependent filter kernel for filtering a stereoscopic 3D image. For synthetic 3D data, we use a simpler approach: smearing the data by a distance proportional to its depth.
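The depth-proportional smearing for synthetic data can be sketched as a 1-D averaging filter whose window half-width grows with distance from the screen plane. This is a minimal illustration, not the paper's implementation; the scale factor `k` and the function name are hypothetical.

```python
def depth_smear(row, depths, k=2.0):
    """Smear a 1-D row of intensities by a distance proportional to depth.

    Each sample is averaged over a window whose half-width is
    round(k * |depth|), where depth 0 is the screen plane.
    `k` is a hypothetical scale factor chosen for illustration.
    """
    out = []
    n = len(row)
    for i in range(n):
        r = int(round(k * abs(depths[i])))  # window half-width grows with depth
        lo, hi = max(0, i - r), min(n, i + r + 1)
        window = row[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Samples at the screen plane (depth 0) pass through unchanged, while samples far from the plane are increasingly blurred, suppressing features too small to abut across adjacent views.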
We report light collimation from a point source without the space normally needed for fan-out. Rays emerge uniformly from all parts of the surface of a blunt wedge light-guide when a point source of light is placed at the thin end, and the source's position determines ray direction in the manner of a lens. A lenticular array between this light-guide and a liquid crystal panel guides light from colored light-emitting diodes to designated sub-pixels, thereby removing the need for color filters and halving power consumption; we foresee much greater power economies and wider applications.
This paper presents the creation of an assembly simulation environment with multisensory (auditory and visual) feedback, and evaluates the effects of auditory and visual feedback on task performance in the context of assembly simulation in a virtual environment (VE). The VE experimental platform brings together complex technologies, including constraint-based assembly simulation, optical motion tracking, and real-time 3D sound generation, around a virtual reality workbench and a common software platform. A peg-in-a-hole task and a Sener electronic box assembly task were used as test cases in a human-factors experiment with sixteen participants. Both objective performance data (task completion time, TCT, and human performance error rate, HPER) and subjective opinions (questionnaires) on the use of auditory and visual feedback in a virtual assembly environment (VAE) were gathered. Results showed that introducing auditory and/or visual feedback into the VAE improved assembly task performance, and that integrated feedback (auditory plus visual) yielded better performance than either form of feedback used in isolation. Most participants preferred integrated feedback to either individual feedback (auditory or visual) or no feedback. Participants' comments indicated that unrealistic or inappropriate feedback degraded task performance and quickly led to frustration.
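The two objective measures can be summarized per feedback condition as a mean TCT and an error rate over attempts. The sketch below assumes a hypothetical trial layout of `(condition, tct_seconds, errors, attempts)` tuples; it is an illustration of the metrics, not the study's analysis code.

```python
def summarize(trials):
    """Group assembly trials by feedback condition and report mean task
    completion time (TCT) and human performance error rate (HPER).

    `trials` is a list of (condition, tct_seconds, errors, attempts)
    tuples -- a hypothetical data layout chosen for illustration.
    """
    stats = {}
    for cond, tct, errors, attempts in trials:
        s = stats.setdefault(cond, {"tct": [], "err": 0, "att": 0})
        s["tct"].append(tct)
        s["err"] += errors
        s["att"] += attempts
    return {
        cond: {
            "mean_tct": sum(s["tct"]) / len(s["tct"]),  # mean completion time
            "hper": s["err"] / s["att"],                # errors per attempt
        }
        for cond, s in stats.items()
    }
```

Comparing `mean_tct` and `hper` across the no-feedback, auditory, visual, and integrated conditions is how the abstract's performance claims would be quantified.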