In this paper, we propose a new system for a real-time holographic augmented reality (AR) video service based on photorealistic three-dimensional (3D) object points, which multiple users can use simultaneously from various locations and viewpoints. To allow the object to be observed from all viewpoints, a camera system capable of acquiring the 3D volume of a real object is developed and used to generate a photorealistic 3D object in real time. Using the normal of each object point, the points observable from the viewpoint at which a user is located are selected, and a point-based hologram is generated. The angle at which the light reflected from each point is incident on the hologram plane is calculated, and the intensity of the interference light is adjusted according to this angle to produce a hologram with a stronger 3D effect. The generated hologram is transmitted to each user to provide the holographic AR service. The entire system consists of a camera rig comprising eight RGB-D (depth) cameras and two workstations for photorealistic 3D volume and hologram generation. Using this technique, a realistic hologram was generated. Experiments displaying holograms simultaneously from several different viewpoints confirm that multiple users can concurrently receive the holographic AR service.
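The angle-weighted point-based accumulation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, resolution, pixel pitch, and wavelength are assumptions, and the contribution of each point is a spherical wave weighted by the cosine of the angle between the point's surface normal and the direction to each hologram pixel.

```python
import numpy as np

def point_hologram(points, normals, intensities, wavelength=532e-9,
                   pitch=8e-6, res=(256, 256)):
    """Accumulate a point-based hologram: each object point contributes a
    spherical-wave fringe weighted by the cosine of the angle between its
    surface normal and the ray to each hologram pixel (parameters are
    illustrative assumptions, not values from the paper)."""
    k = 2 * np.pi / wavelength
    H, W = res
    ys = (np.arange(H) - H / 2) * pitch
    xs = (np.arange(W) - W / 2) * pitch
    X, Y = np.meshgrid(xs, ys)           # hologram-plane pixel coordinates
    field = np.zeros(res, dtype=np.complex128)
    for (px, py, pz), n, amp in zip(points, normals, intensities):
        dx, dy, dz = X - px, Y - py, -pz  # vector from object point to pixel
        r = np.sqrt(dx**2 + dy**2 + dz**2)
        # cosine of the incidence angle; clip so back-facing points vanish
        cos_theta = np.clip((dx * n[0] + dy * n[1] + dz * n[2]) / r, 0, None)
        field += amp * cos_theta * np.exp(1j * k * r) / r
    return field
```

In this sketch, points behind their own surface normal contribute nothing, which mirrors the visibility selection described in the abstract.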
We propose a new learning and inference model that generates digital holograms using deep neural networks (DNNs). The DNN is a generative adversarial network trained to infer a complex two-dimensional fringe pattern from a single object point. The fringe pattern inferred for each object point is multiplied by the point's intensity, and all fringe patterns are accumulated to form the complete hologram. The method achieves generality by recording holograms for two spaces (16 Space and 32 Space). The reconstruction results for both spaces proved nearly identical to numerical computer-generated holograms, achieving 44.56 and 35.11 dB, respectively. By displaying the generated holograms on optical equipment, we demonstrate that holograms generated by the proposed DNN can be optically reconstructed.
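The accumulation step described above, performed after the network has inferred a fringe pattern per point, can be sketched as below. The function name and array shapes are assumptions; the network itself is out of scope here, so the inferred fringes are simply taken as inputs.

```python
import numpy as np

def accumulate_hologram(fringes, intensities):
    """Combine per-point fringe patterns into one hologram: multiply each
    inferred complex fringe by its object point's intensity and sum them
    (a minimal sketch; the DNN that infers each fringe is assumed given)."""
    hologram = np.zeros_like(fringes[0], dtype=np.complex128)
    for fringe, amp in zip(fringes, intensities):
        hologram += amp * fringe
    return hologram
```

Because the accumulation is a plain weighted sum, it parallelizes trivially over points, which is what makes the per-point inference approach practical.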
This paper proposes a coding method for compressing phase-only hologram video (PoHV), which can be displayed directly on a commercial phase-only spatial light modulator. Recently, there has been active research on using a standard codec as an anchor when developing new video coding for 3D data, as in MPEG point cloud compression. The main merit of this approach is that whenever a new video codec is developed, the performance of the coding methods built on it improves with it. Furthermore, compatibility is improved because various anchor codecs can be used, and development time is reduced. This paper uses a current video codec as the anchor and develops a coding method combining progressive scaling and a deep neural network to overcome the low temporal correlation between frames of a PoHV. Because the temporal correlation between PoHV frames is difficult to predict, the scaling function and neural network are applied in the encoding and decoding processes without adding complexity to the anchor itself. The proposed coding method achieves an average coding gain of 22% over the anchor across all coding conditions. In both numerical and optical reconstructions, the images produced by the proposed method show clearer objects and less judder than those produced by the anchor.
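A phase-only hologram must be mapped to integer gray levels before it can be fed to an anchor video codec or a phase-only SLM, and mapped back after decoding. The following is a minimal sketch of such a mapping; the function names and the 256-level depth are assumptions, and the paper's progressive scaling and learned restoration stages are not reproduced here.

```python
import numpy as np

def scale_phase_to_gray(phase, levels=256):
    """Map phase values in [-pi, pi) to integer gray levels suitable as
    input to an anchor video codec or a phase-only SLM (the 256-level
    depth is an illustrative assumption)."""
    norm = (phase + np.pi) / (2 * np.pi)          # normalize to [0, 1)
    return np.clip(np.round(norm * (levels - 1)), 0, levels - 1).astype(np.uint8)

def gray_to_phase(gray, levels=256):
    """Inverse mapping applied after the anchor decoder, before any
    further restoration stage."""
    return gray.astype(np.float64) / (levels - 1) * 2 * np.pi - np.pi
```

The round trip loses at most one quantization step of phase, which bounds the error this stage alone introduces before the codec's own lossy compression.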