We propose a Tone Mapping Operator, denoted HMD-TMO, dedicated to the visualization of 360° High Dynamic Range images on Head-Mounted Displays. The few existing studies on this topic have shown that Tone Mapping Operators designed for classic 2D images are not suited to 360° High Dynamic Range images. Consequently, several dedicated operators have been proposed. Instead of operating on the entire 360° image, they only consider the part of the image currently viewed by the user. Tone mapping only the viewed part of the 360° image is less challenging, but it does not preserve the global luminance dynamic of the scene. To cope with this problem, we propose a novel Tone Mapping Operator that takes advantage of 1) a view-dependent tone mapping that enhances contrast, and 2) a Tone Mapping Operator applied to the entire 360° image that preserves global coherency. Furthermore, the proposed operator is adapted to the human eye's perception of luminance on Head-Mounted Displays; we present two subjective studies to model lightness perception on such displays.
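To illustrate the idea behind such a hybrid operator, the following Python sketch blends a viewport-only tone curve with a mapping computed over the full 360° luminance map. The log curve, the histogram-based viewport mapping, and the blending weight alpha are assumptions made for illustration only; they are not the operator described in the abstract.

import numpy as np

def global_tmo(lum_360, eps=1e-6):
    # Simple log-based tone curve applied to the full 360-degree luminance map,
    # so the mapping stays consistent regardless of where the user looks.
    log_l = np.log(lum_360 + eps)
    return (log_l - log_l.min()) / (log_l.max() - log_l.min() + eps)

def view_tmo(lum_view):
    # Histogram-equalisation-style mapping computed only on the current viewport,
    # which maximises local contrast but ignores the rest of the scene.
    hist, bin_edges = np.histogram(lum_view, bins=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(lum_view, bin_edges[:-1], cdf)

def hybrid_tmo(lum_360, viewport, alpha=0.5):
    # Blend a view-dependent mapping (local contrast) with the global mapping
    # (scene-wide coherency) inside the current viewport.
    g = global_tmo(lum_360)
    rows, cols = viewport                      # index arrays selecting the viewed pixels
    v = view_tmo(lum_360[rows, cols])
    return alpha * v + (1.0 - alpha) * g[rows, cols]

In this sketch, increasing alpha favours contrast in the current view, while decreasing it favours coherency with the rest of the scene.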
Example-based colour transfer between images, which has attracted considerable interest over the past decades, consists of transferring the colours of one image to another. Many methods based on colour distributions have been proposed, and more recently neural networks have also proven effective for colour transfer problems. In this paper, we propose a new pipeline, with methods adapted from the image domain, to automatically transfer the colours from a target point cloud to an input point cloud. These colour transfer methods are based on colour distributions and account for the geometry of the point clouds to produce a coherent result. The proposed methods rely on simple statistical analysis, are effective, and succeed in transferring the colour style from one point cloud to another. The qualitative results of the colour transfers are evaluated and compared with existing methods.
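A minimal sketch of a distribution-based colour transfer between point clouds, in the spirit of the statistical matching described above: the per-channel means and standard deviations of the input colours are aligned with those of the target. The geometry-aware refinements mentioned in the abstract are omitted, and the function name and array layout are illustrative assumptions.

import numpy as np

def transfer_colour_stats(input_colours, target_colours):
    # Match the per-channel mean and standard deviation of the input point
    # cloud's colours to those of the target point cloud (Reinhard-style
    # statistics transfer); geometry is not taken into account here.
    in_mean, in_std = input_colours.mean(axis=0), input_colours.std(axis=0) + 1e-8
    tgt_mean, tgt_std = target_colours.mean(axis=0), target_colours.std(axis=0)
    transferred = (input_colours - in_mean) / in_std * tgt_std + tgt_mean
    return np.clip(transferred, 0.0, 1.0)

# usage: colours are (N, 3) arrays in [0, 1], one row per point
# recoloured = transfer_colour_stats(input_cloud_rgb, target_cloud_rgb)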
Gaze behavior of virtual characters in video games and virtual reality experiences is a key factor of realism and immersion. Indeed, gaze plays many roles when interacting with the environment: not only does it indicate what characters are looking at, it also contributes to verbal and non-verbal behaviors and makes virtual characters appear alive. Automatically computing gaze behaviors is, however, a challenging problem, and to date none of the existing methods produce close-to-real results in an interactive context. We therefore propose a novel method that leverages recent advances in several distinct areas: visual saliency, attention mechanisms, saccadic behavior modelling, and head-gaze animation techniques. Our approach combines these advances into a multi-map saliency-driven model that provides real-time, realistic gaze behaviors for non-conversational characters, together with user control over customizable features to compose a wide variety of results. We first evaluate the benefits of our approach through an objective evaluation that compares our gaze simulation against ground-truth data from an eye-tracking dataset acquired specifically for this purpose. We then rely on a subjective evaluation to measure the level of realism of gaze animations generated by our method, in comparison with gaze animations captured from real actors. Our results show that our method generates gaze behaviors that cannot be distinguished from captured gaze animations. Overall, we believe these results open the way to more natural and intuitive design of realistic and coherent gaze animations for real-time applications.
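A minimal sketch of a multi-map saliency-driven gaze controller, under the assumption that several per-cue saliency maps are combined with user-controllable weights and that a saccade is triggered only when a new location is clearly more salient than the current fixation. The cue maps, weights, and threshold below are illustrative assumptions, not the model described in the abstract.

import numpy as np

def combine_saliency_maps(maps, weights):
    # Weighted combination of per-cue saliency maps (e.g. colour contrast,
    # motion, faces) into a single attention map, normalised to [0, 1].
    combined = sum(w * m for w, m in zip(weights, maps))
    return combined / (combined.max() + 1e-8)

def select_gaze_target(attention_map, current_target, saccade_threshold=0.2):
    # Trigger a saccade towards the most salient location only if it is
    # sufficiently more salient than the currently fixated location,
    # which avoids jittery gaze switching between frames.
    idx = np.unravel_index(np.argmax(attention_map), attention_map.shape)
    if attention_map[idx] - attention_map[current_target] > saccade_threshold:
        return idx          # new fixation point (row, col in the attention map)
    return current_target   # keep fixating the current target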