“…To reduce ghost images, the PVB interlayer between the two glass layers of the windshield is modified into a wedge: instead of the traditional uniform thickness, it is made thick at the top and thin at the bottom, forming a wedge angle. Light refracted through the first surface of the glass then strikes the second surface at a changed height and angle, so the two virtual images formed by reflection at the first and second surfaces coincide, reducing the double-image phenomenon [47]. Alternatively, using a partially transparent Fresnel reflector as the combiner can also eliminate ghost images by means of its Fresnel pattern [48], since it prevents the ghost image from reaching the driver's eyes (Figure 5b).…”
Section: Virtual Image Distance Within 10 m
As the next generation of in-vehicle intelligent platforms, the augmented reality head-up display (AR-HUD) has a large information-interaction capacity: it can provide drivers with driving-assistance information, avoid the distraction caused by looking down while driving, and greatly improve driving safety. However, AR-HUD systems still face great challenges in realizing multi-plane full-color display, and they cannot yet truly integrate virtual information with real road conditions. To overcome these problems, many new devices and materials have been applied to AR-HUDs, and many novel systems have been developed. This study first reviews key metrics of HUDs, then investigates the structures of various picture generation units (PGUs), and finally focuses on the development status of AR-HUDs, analyzing the advantages and disadvantages of existing technologies and pointing out future research directions for AR-HUDs.
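The wedge-interlayer idea quoted above can be checked with first-order ray optics. The sketch below estimates the lateral offset between the first- and second-surface reflections of a plane-parallel windshield and the wedge angle that would make the two virtual images coincide. All numeric parameters (glass thickness, incidence angle, refractive index, virtual image distance) are illustrative assumptions, not values from the cited works, and the small-angle relation used for the wedge deviation is a first-order approximation.

```python
import math

# Illustrative first-order ghost-image estimate for a wedged-windshield HUD.
n = 1.52                      # refractive index of windshield glass (typical)
t = 0.005                     # total glass thickness in metres (assumed 5 mm)
theta_i = math.radians(60.0)  # angle of incidence of the HUD beam (assumed)
d_vi = 2.5                    # virtual image distance in metres (assumed)

# Snell's law gives the refraction angle inside the glass.
theta_t = math.asin(math.sin(theta_i) / n)

# Lateral offset between the first-surface and second-surface reflections
# of a plane-parallel plate: s = 2 * t * tan(theta_t) * cos(theta_i).
s = 2 * t * math.tan(theta_t) * math.cos(theta_i)

# Angular separation of the two virtual images as seen by the driver,
# approximating both images as lying at the virtual image distance.
ghost_angle = s / d_vi

# A thin wedge of angle alpha deviates the second-surface reflection by
# roughly 2 * n * alpha, so the wedge angle that merges the two images is:
alpha = ghost_angle / (2 * n)

print(f"beam offset s    = {s * 1e3:.2f} mm")
print(f"ghost separation = {ghost_angle * 1e3:.2f} mrad")
print(f"required wedge   = {alpha * 1e3:.3f} mrad")
```

With these assumed numbers the required wedge comes out at a few tenths of a milliradian, which is the right order of magnitude for commercial wedged PVB interlayers.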
“…Additionally, it is important to highlight that the solutions addressed by this paper do not rely on eye-tracking. For example, the work of Lee et al. [19] proposes a light-field-based 3D HUD that uses eye-tracking. Furthermore, there are immersive HMIs in the scientific literature that show promise for the automotive context, such as the light field prototype of Duarte et al. [22], yet they are not designed specifically for the use case investigated here.…”
Section: Projection-based Light Field Visualization Technology
“…To the best of the authors' knowledge, this is the first work to consider V2X sensor data in the context of automotive light field visualization. Related research efforts study the hardware-in-the-loop simulation of autonomous vehicles [12], light field imaging for autonomous underwater vehicles (AUVs) [13, 14], zero-latency motion visualization [15], and AR [16, 17, 18, 19, 20] automotive head-up displays (HUDs). Light field HUDs and windshields (or windscreens) are addressed by several works [21, 22, 23, 24, 25, 26]; however, none of them study V2X sensor data.…”
Practical use of V2X communication protocols has emerged in recent years. Data built on sensor information are displayed via onboard units and smart devices. However, perceiving such data may be counterproductive in terms of visual attention, particularly for safety-related applications. Using the windshield as a display may solve this issue, but switching between 2D information and the 3D reality of traffic may introduce issues of its own. To overcome such difficulties, automotive light field visualization is introduced. In this paper, we investigate the visualization of V2X communication protocols and use cases via projection-based light field technology. Our work is motivated by the abundance of V2X sensor data, the low latency of V2X data transfer, the availability of automotive light field prototypes, the prevailing dominance of non-autonomous and non-remote driving, and the lack of V2X-based light field solutions. As our primary contributions, we provide a comprehensive technological review of light field and V2X communication, a set of recommendations for design and implementation, an extensive discussion and implication analysis, an exploration of utilization based on standardized protocols, and use-case-specific considerations.
“…Currently, with the requirements for high resolution and precision in modern optical systems, such as the James Webb Space Telescope (JWST) launched in 2021, extreme ultraviolet lithography (EUL), and head-up displays (HUDs) [1][2][3], the complexity and quantity of optical components are also increasing. In the processing of complex optical surfaces, frequent measurement of the surface shape of the workpiece is required.…”