In this paper we present a new technique for displaying High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays. The process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs best in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish which TMO performs best in each luminance zone, we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several new HDR images and conducted a further psychophysical experiment, using an HDR display as reference, which establishes the advantages of our hybrid three-stage approach over a traditional individual TMO.
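The segment/select/merge pipeline described above can be sketched as follows. This is a minimal illustration only: it assumes hard per-pixel zone masks and uses two toy stand-in curves in place of the six psychophysically selected TMOs, and it omits any blending at zone boundaries.

```python
import numpy as np

def composite_tone_map(hdr, zone_edges, tmos):
    """Hypothetical sketch of the three-stage pipeline:
    segment by luminance, apply a per-zone TMO, merge the outputs."""
    # Stage 1: segment into luminance zones via log-luminance thresholds.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_lum = np.log10(np.maximum(lum, 1e-6))
    zones = np.digitize(log_lum, zone_edges)  # zone index per pixel

    # Stages 2 and 3: apply the TMO assigned to each zone, then merge.
    ldr = np.zeros_like(hdr)
    for idx, tmo in enumerate(tmos):
        mask = zones == idx
        ldr[mask] = tmo(hdr)[mask]
    return np.clip(ldr, 0.0, 1.0)

# Toy stand-ins for real operators (NOT the TMOs evaluated in the paper):
reinhard_like = lambda img: img / (1.0 + img)          # global compression
gamma_like = lambda img: np.power(np.clip(img, 0, None), 1 / 2.2) / 10.0
```

In the actual method the per-zone operator choice comes from the preliminary psychophysical experiment; here `zone_edges` and the operator list are free parameters for demonstration.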
Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resulting difference in image quality. Furthermore, cross-modal effects – that is, the influence of one sensory input on another, for example sound and visuals – have also recently been shown to have a substantial impact on viewer perception of image quality. In this paper we investigate the relationship between audio beat rate and video frame rate in order to manipulate temporal visual perception. This represents an initial step towards establishing a comprehensive understanding of audio-visual integration in multisensory environments.
Few tone mapping operators (TMOs) take color management into consideration, limiting compression to luminance values only. This can lead to changes in image chroma and hue, which are typically handled in a post-processing step. However, current post-processing techniques for tone reproduction do not explicitly consider the target display gamut. Gamut mapping, on the other hand, deals with mapping images from one color gamut to another, usually smaller, gamut, but has traditionally focused on smaller-scale chromatic changes. The authors present a combined gamut- and tone-management framework for color-accurate reproduction of high dynamic range images that can prevent hue and luminance shifts while taking gamut boundaries into consideration. Their approach is conceptually and computationally simple, parameter-free, and compatible with existing TMOs.
Classification of 3D objects – the selection of the category to which each object belongs – is of great interest in the field of machine learning. Numerous researchers tackle this problem with deep neural networks, varying both the network architecture and the representation of the 3D shape used as input. To investigate the effectiveness of these approaches, we first conduct an extensive survey of existing methods and identify common ideas by which we organize them into a taxonomy. Second, we evaluate 11 selected classification networks on three 3D object datasets, extending the evaluation to a larger dataset on which most of the selected approaches have not yet been tested. For this, we provide a framework for converting shapes from common 3D mesh formats into the formats native to each network, and for training and evaluating different classification approaches on these data. Although we are generally unable to reach the accuracies reported in the original papers, we can compare the relative performance of the approaches, as well as their performance when the dataset is the only variable changed, to provide valuable insights into behavior on different kinds of data. We make our code available to simplify running training experiments with multiple neural networks with different prerequisites.
A major obstacle for real-time rendering of high-fidelity graphics is computational complexity. A key point to consider in the pursuit of “realism in real time” in computer graphics is that the Human Visual System (HVS) is a fundamental part of the rendering pipeline. The human eye is only capable of sensing image detail in a 2° foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. These points of interest are prioritized based on the saliency of the objects in the scene or the task the user is performing. Such “glimpses” of a scene are then assembled by the HVS into a coherent, but inevitably imperfect, visual perception of the environment. In this process, much detail that the HVS deems unimportant may literally go unnoticed.
Visual science research has identified that movement in the background of a scene may substantially influence how subjects perceive foreground objects. Furthermore, recent computer graphics work has shown that both fixed-viewpoint and dynamic scenes can be selectively rendered, without any perceptual loss of quality and in significantly reduced time, by exploiting knowledge of any high-saliency movement that may be present. High-saliency movement can be generated in a scene when an otherwise static object starts moving. In this article, we investigate, through psychophysical experiments including eye tracking, the perception of rendering quality in dynamic complex scenes when a moving object is introduced. Two types of object movement are investigated: (i) rotation in place and (ii) rotation combined with translation. These were chosen as the simplest movement types; future studies may include movement with varied acceleration. The object's geometry and location in the scene are not salient. We then use this information to guide our high-fidelity selective renderer to produce perceptually high-quality images at significantly reduced computation times. We also show how these results have important implications for virtual environment and computer games applications.