High dynamic range (HDR) imaging can handle real-world lighting, whereas traditional low dynamic range (LDR) imaging struggles to accurately represent scenes with a wide range of luminance. However, most imaging content is still available only in LDR. This paper presents a method, termed ExpandNet, for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs). ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. The added information is reconstructed from learned features, as the network is trained in a supervised fashion on a dataset of HDR images. The approach is fully automatic and data-driven; it requires no heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids upsampling layers in order to improve image quality. The method compares well quantitatively against expansion/inverse tone mapping operators on multiple metrics, even for badly exposed inputs.
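The kinds of information loss ExpandNet is trained to reverse can be illustrated with a short sketch. The NumPy snippet below simulates an LDR formation pipeline (tone mapping, gamma encoding, clipping and 8-bit quantization); the Reinhard-style tone curve and parameter choices are assumptions for illustration, not the pipeline used in the paper:

```python
import numpy as np

def hdr_to_ldr(hdr, gamma=2.2):
    """Simulate LDR formation: tone mapping, gamma encoding,
    clipping and 8-bit quantization. The x/(1+x) tone curve is
    an illustrative assumption, not the paper's pipeline."""
    tone_mapped = hdr / (1.0 + hdr)            # compress dynamic range
    gamma_enc = tone_mapped ** (1.0 / gamma)   # display gamma encoding
    clipped = np.clip(gamma_enc, 0.0, 1.0)     # clip out-of-range values
    return np.round(clipped * 255) / 255       # 8-bit quantization

# A scene spanning a 100,000:1 luminance range collapses into [0, 1]:
hdr = np.array([0.001, 0.1, 1.0, 10.0, 100.0])
ldr = hdr_to_ldr(hdr)
```

Distinct bright HDR luminances (e.g. 100 and 101) end up in the same 8-bit code after this pipeline, which is exactly the information a learned expansion must reconstruct from context.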
In recent years many Tone Mapping Operators (TMOs) have been presented in order to display High Dynamic Range Images (HDRIs) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The dual of tone mapping, inverse tone mapping, expands a Low Dynamic Range Image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. The majority of today's media is stored in low dynamic range. Inverse Tone Mapping Operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image-based lighting. We propose an approximate solution to this problem that uses median-cut to find areas of high luminance and subsequently applies density estimation to generate an expand-map, which extends the range in the high-luminance areas using an inverse Photographic Tone Reproduction operator.
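The expansion pipeline above can be sketched roughly as follows. The snippet substitutes a simple luminance threshold and a box-blur density estimate for the paper's median-cut and expand-map generation, so the helper functions and parameter values are illustrative assumptions; only the inverse of the Photographic Tone Reproduction curve, L = Ld / (1 - Ld), comes from the operator named above:

```python
import numpy as np

def box_blur(img, k):
    """Average over a (2k+1)x(2k+1) neighbourhood (edge-padded);
    stands in for a proper density estimate."""
    p = np.pad(img, k, mode='edge')
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:k + dy + img.shape[0],
                     k + dx:k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def expand_ldr(lum, threshold=0.9, k=2, max_scale=4.0):
    """Expand-map guided range expansion (sketch). High-luminance
    pixels (threshold instead of median-cut, an assumption) are
    smoothed into an expand-map that blends the input towards the
    inverse Reinhard curve Ld / (1 - Ld)."""
    hot = (lum >= threshold).astype(float)
    expand_map = box_blur(hot, k)              # density estimate
    safe = np.clip(lum, 0.0, 0.999)            # avoid division by zero
    inverse_reinhard = safe / (1.0 - safe)     # expands highlights
    boosted = np.minimum(inverse_reinhard, max_scale)
    return (1.0 - expand_map) * lum + expand_map * boosted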
The computation of high-fidelity images in real time remains one of the key challenges for computer graphics. Recent work has shown that, by understanding the human visual system, selective rendering may be used to render at high quality only those parts of a scene to which the human viewer is attending, and the rest of the scene at much lower quality. This can result in a significant reduction in computation time without the viewer being aware of the quality difference. Selective rendering is guided by models of the human visual system, typically in the form of a 2D saliency map, which predicts where the user will be looking in any scene. Computing these maps often takes many seconds, thus precluding such an approach in any interactive system where many frames need to be rendered per second. In this paper we present a novel saliency map which exploits the computational performance of modern GPUs. With our approach it is thus possible to calculate this map in milliseconds, allowing it to be part of a real-time rendering system. In addition, we also show how depth, habituation and motion can be added to the saliency map to further guide the selective rendering. This ensures that only the most perceptually important parts of any animated sequence need be rendered in high quality. The rest of the animation can be rendered at significantly lower quality, and thus much lower computational cost, without the user being aware of the difference.
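A 2D saliency map of the kind used to guide selective rendering can be approximated by center-surround contrast on the luminance channel. The sketch below is a simplified, CPU-side illustration (Itti-style, with assumed kernel sizes); it does not reproduce the paper's GPU implementation, nor the depth, habituation and motion channels:

```python
import numpy as np

def box_blur(img, k):
    """Average over a (2k+1)x(2k+1) neighbourhood (edge-padded)."""
    p = np.pad(img, k, mode='edge')
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:k + dy + img.shape[0],
                     k + dx:k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def saliency(lum, center_k=1, surround_k=4):
    """Center-surround contrast: pixels that differ from their
    surroundings score high. A simplified luminance-only channel,
    normalized to [0, 1]; kernel sizes are assumptions."""
    center = box_blur(lum, center_k)
    surround = box_blur(lum, surround_k)
    s = np.abs(center - surround)
    return s / s.max() if s.max() > 0 else s
```

On a uniform image the map is zero everywhere, while an isolated bright spot scores maximally; a renderer could then allocate samples per pixel in proportion to this map.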
In the last few years, researchers in the field of High Dynamic Range (HDR) imaging have focused on providing tools for expanding Low Dynamic Range (LDR) content for the generation of HDR images.