We explore the feasibility of implementing stereoscopy-based 3D images with an eye-tracking light-field display and actual head-up display optics for automotive applications. We translate the driver's eye position into the virtual eyebox plane via a lightweight equation that replaces the actual optics with an effective lens model, and we implement a light-field rendering algorithm driven by the model-processed eye-tracking data. Our experimental results with a prototype closely match our ray-tracing simulations in terms of the designed viewing conditions and the width of the low-crosstalk margin. The prototype successfully delivers virtual images with a field of view of 10° × 5° and static crosstalk below 1.5%.
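The abstract does not reproduce the "lightweight" equation itself, but the idea of collapsing the HUD optics into an effective lens can be sketched with a generic paraxial thin-lens mapping. The focal length, eye distance, and coordinates below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: mapping a tracked eye position to its conjugate point in a
# virtual eyebox plane via an effective thin lens (paraxial approximation).
# f_mm and u_mm are illustrative assumptions; the paper's actual model and
# sign conventions may differ.

def effective_lens_map(eye_xy, f_mm, u_mm):
    """Map lateral eye coordinates (mm) at distance u_mm from an effective
    thin lens of focal length f_mm to the conjugate plane, using the
    thin-lens relation 1/v = 1/f - 1/u and lateral magnification m = v/u."""
    v_mm = 1.0 / (1.0 / f_mm - 1.0 / u_mm)  # conjugate (image) distance
    m = v_mm / u_mm                          # lateral magnification
    x, y = eye_xy
    return (m * x, m * y)

# Example: eye 10 mm off-axis, 1000 mm from a 400 mm effective lens
print(effective_lens_map((10.0, 0.0), 400.0, 1000.0))
```

In practice such a closed-form mapping lets the renderer update per-frame eye coordinates without tracing rays through the full optical prescription.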
We propose a dynamic crosstalk measurement method, based on single-frame images and eye-motion prediction, for augmented reality 3D head-up display applications in fast-moving cars. From these measurements, we establish criteria for 3D-to-2D content conversion to avoid driver distraction, quantify the driver-movement speed threshold, and propose a technique for seamless content conversion.
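The abstracts quote crosstalk figures without stating the formula; a commonly used definition in stereoscopic-display metrology is leakage luminance normalized by signal luminance, both corrected for the black level. This is a minimal sketch of that standard definition (the papers' exact measurement protocol is not reproduced here, and the luminance values are illustrative):

```python
# Hedged sketch: the widely used stereoscopic crosstalk metric,
# measured at one eye's viewing position. Inputs are luminances (cd/m^2):
#   l_leak   -- luminance leaking from the other eye's image
#   l_signal -- luminance of the intended image
#   l_black  -- display black level
# All numbers in the example are illustrative assumptions.

def crosstalk_percent(l_leak, l_signal, l_black):
    """Crosstalk (%) = 100 * (leakage - black) / (signal - black)."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Example: 0.6 cd/m^2 leakage, 40 cd/m^2 signal, 0.1 cd/m^2 black level
print(round(crosstalk_percent(0.6, 40.0, 0.1), 2))  # -> 1.25
```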
Augmented reality head-up displays (HUDs) require virtual-object distance matching to the real scene across an adequate field of view (FoV). At the same time, pupil-replication-based waveguide systems provide a wide FoV while affording compact HUDs. To provide 3D imaging and enable virtual-object distance matching in such waveguide systems, we propose a time-sequential autostereoscopic imaging architecture using synchronized multi-view picture generation and eyebox formation units. Our simulation setup to validate the system's feasibility yields an FoV of 15° × 7.5° with clear, crosstalk-free images at a resolution of 60 pix/deg for each eye. Our proof-of-concept prototype with reduced specifications yields results that are consistent with the simulation in terms of viewing-zone formation: the viewing zones for the left and right eyes can be clearly observed in the plane of the eyebox. Finally, we discuss how the initial distance of the virtual image can be set for quantified fatigue-free 3D imaging, and how the FoV can be further extended in such waveguide systems by varying the parameters of the eyebox formation unit.