The objective of this paper is to provide an overview of recent trends in the area of three-dimensional television. This includes the application of new 3-D data representation formats, which are inherently interactive and more flexible than the traditional (two-view) stereoscopic image. In this context, we describe an experimental 3-DTV system that is based on the joint distribution of monoscopic color video and associated per-pixel depth information. From these data, one or more "virtual" views of a real-world scene can be synthesized in real time at the receiver side (i.e., in a 3-DTV set-top box) by means of so-called depth-image-based rendering (DIBR) techniques. In addition, the paper provides details on the latest advances in glasses-free (autostereoscopic) 3-DTV display development, for both single and multiple users, as well as on multimodal user interfaces based on head, gaze, or gesture tracking.
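The core of DIBR is a horizontal warp: each color pixel is shifted by a disparity proportional to its inverse depth, so nearby objects move more than distant ones, and disoccluded regions become holes that a real renderer would fill. The following is a minimal sketch of that idea, not the paper's implementation; the 8-bit depth convention and the normalization of disparity to a pixel baseline are assumptions for illustration.

```python
import numpy as np

def dibr_virtual_view(color, depth, baseline_px, z_near, z_far):
    """Synthesize a horizontally shifted virtual view from a color image
    plus per-pixel depth (a simplified DIBR warp).

    depth: 8-bit map, 255 = nearest (z_near), 0 = farthest (z_far).
    baseline_px: maximum disparity in pixels for the nearest depth.
    Disoccluded pixels are left as holes (black) in this sketch.
    """
    h, w = depth.shape
    # Map 8-bit depth to inverse depth; disparity is proportional to 1/Z.
    inv_z = 1.0 / z_far + (depth.astype(np.float32) / 255.0) * (1.0 / z_near - 1.0 / z_far)
    disparity = (baseline_px * inv_z / inv_z.max()).round().astype(int)
    virtual = np.zeros_like(color)
    xs = np.arange(w)
    for y in range(h):
        tx = xs + disparity[y]          # target columns in the virtual view
        valid = (tx >= 0) & (tx < w)    # drop pixels warped outside the frame
        virtual[y, tx[valid]] = color[y, xs[valid]]
    return virtual
```

A production renderer would additionally resolve occlusion order (far-to-near painting or z-buffering) and inpaint the holes; this sketch only shows the per-pixel shift that a set-top box performs per synthesized view.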
Keywords-(Auto)stereoscopic 3-D displays, coding and broadcast transmission, depth-image-based rendering (DIBR), interactive media, three-dimensional television (3-DTV).
A novel simulation tool has been developed for spatially multiplexed 3D displays. The main purpose of our software is the design of 3D displays with optical image splitters, in particular lenticular grids or wavelength-selective barriers. The interaction of the image splitter with a ray-emitting display is modeled as a spatial light modulator that generates the autostereoscopic image representation. Based on this simulation model, the interaction of the optoelectronic devices with the defined spatial planes is described. Time-sequential multiplexing makes it possible to increase the resolution of such 3D displays; for this reason, the program was extended with an intermediate data-accumulation component. The simulation program implements a stepwise, quasi-static model of the functionality and control of the arrangement. It calculates and renders the complete ray emission of the display and the luminance distribution at the viewing distance. The complexity of the results increases when wavelength-selective barriers are used. The images visible at the viewer's eye positions are determined by simulation after every switching operation of the optical image splitter, and the resulting data are summed and evaluated in correspondence with the equivalent time sequence. The simulation was further expanded with an algorithm for the automated search and validation of possible solutions in the multi-dimensional parameter space. For the design of multiview 3D displays, a combination of ray tracing and 3D rendering is used: the emitted light intensity distribution of each subpixel is evaluated in terms of color, luminance, and visible area for different content distributions on the subpixel plane. The analysis of the accumulated data delivers several candidate solutions, distinguished by the chosen evaluation criteria.
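The automated search over the multi-dimensional parameter space can be pictured as scoring every combination of design parameters (e.g., splitter pitch and slant) with a simulated quality metric and keeping the best candidates. The sketch below assumes an exhaustive grid search with a scalar score per combination; the parameter names and score function are hypothetical, not taken from the tool described above.

```python
import itertools

def grid_search(param_grid, score_fn, top_k=3):
    """Exhaustively score every combination of the given parameter ranges
    and return the top_k (score, params) pairs, best first.

    param_grid: dict mapping parameter name -> list of candidate values.
    score_fn:   callable taking a {name: value} dict, returning a float
                (e.g., a simulated luminance-uniformity or crosstalk metric).
    """
    names = list(param_grid)
    results = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        results.append((score_fn(params), params))
    results.sort(key=lambda r: r[0], reverse=True)
    return results[:top_k]
```

For realistic simulators the score is expensive (a full ray-traced luminance rendering per candidate), so the exhaustive loop here stands in for whatever pruned or staged search the tool actually performs.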
This paper gives an overview of our multimodal 3D display technology. Enabling technologies such as head tracking and content adaptation by shifting and scaling are briefly described using the example of a single-user display. These new solutions can be applied to multiview and light-field 3D displays as found on the market, independent of the technology used for image separation, such as a parallax barrier, a lenticular sheet, or a holographic optical element. Beyond that, dual-view and integral-imaging displays can be manipulated with similar results by slightly different means. In general, the new algorithms enable the use of content originally produced for glasses-based 3D displays without the need to compute additional interpolated views.
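Head-tracked content shifting on a multiview display can be reduced to one computation: how many view positions to rotate the interleaved content by so that the viewer's eyes stay inside a stereo sweet spot. The function below is a minimal sketch of that mapping, assuming a fixed lateral view pitch at the nominal viewing distance; the parameter names and the modular wraparound over the repeating viewing lobes are illustrative assumptions, not the paper's algorithm.

```python
def content_shift(head_x_mm, optimal_x_mm, view_pitch_mm, num_views):
    """Number of view positions to rotate the interleaved content by,
    given the tracked lateral head offset from the nominal sweet spot.

    head_x_mm:     tracked lateral head position.
    optimal_x_mm:  head position for which the content is unshifted.
    view_pitch_mm: lateral width of one view zone at viewing distance.
    num_views:     views in one repeating lobe; the shift wraps modulo this.
    """
    offset_views = (head_x_mm - optimal_x_mm) / view_pitch_mm
    return round(offset_views) % num_views
```

Because the shift is a whole number of views, the original (e.g., two-view) content is only reassigned to different subpixel positions; no interpolated intermediate views have to be rendered, which matches the claim above.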