Unbiased global illumination methods based on stochastic techniques produce photorealistic images. However, they are prone to noise that can only be reduced by increasing the number of processed samples. Finding the number of samples required to ensure that most observers cannot perceive any residual noise remains an open problem. In this article, we address this problem by focusing on the visual perception of noise. Rather than relying on known perceptual models, we investigate learning approaches classically used in the field of Artificial Intelligence. We propose to use such approaches to build a model that learns which image areas exhibit perceptible noise. The learning is performed on a database of examples built from experiments on noise perception with human users. The resulting model can then be used in any progressive stochastic global illumination method to find the visual convergence threshold of different parts of an input image.
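As a rough illustration of the pipeline this abstract describes, the sketch below uses per-block image variance as a crude stand-in for the richer noise features a learned model would use, and a simple threshold learner in place of the full model trained on the human-labelled database. All function names and parameters here are hypothetical, not taken from the article:

```python
import numpy as np

def block_noise_features(img, block=16):
    """Split a grayscale image into blocks and return the per-block variance,
    a crude proxy for the noise features the learned model would extract."""
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            feats.append(img[y:y + block, x:x + block].var())
    return np.array(feats)

def learn_threshold(features, labels):
    """Pick the feature threshold that best separates blocks humans labelled
    'noisy' (1) from 'converged' (0) -- a toy stand-in for the learning
    approach (e.g. an SVM-style classifier) described in the article."""
    order = np.argsort(features)
    f, l = features[order], labels[order]
    best_t, best_acc = f[0], 0.0
    for t in f:
        acc = np.mean((f >= t) == (l == 1))
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_t
```

A renderer would then keep sampling any block whose feature value exceeds the learned threshold, and stop refining blocks classified as perceptually converged.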
The aim of realistic image synthesis is to produce high-fidelity images that authentically represent real scenes. As these images are produced for human observers, we can exploit the fact that not everything is perceived when viewing a scene with our eyes. Taking advantage of the limited capacity of the human visual system (HVS) can therefore significantly help optimize rendering software. Global illumination methods are used to simulate realistic lighting in 3D scenes. They generally provide progressive convergence toward a high-quality solution. One problem with such algorithms is determining a stopping condition, i.e., deciding when the computation has reached a satisfactory convergence so that the process can terminate. In this paper, we propose and discuss different solutions to this important problem. We present techniques based on the Visible Differences Predictor (VDP) proposed by Daly [Daly 1993] to define a perceptual stopping condition for rendering computations. We use the VDP to measure the perceived differences between rendered images and to guide Path Tracing toward a target perceptual quality. We also validate our results in a controlled experimental setting with real subjects.
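The perceptual stopping loop this abstract outlines can be sketched as follows. The difference metric below is a plain luminance RMS error standing in for Daly's VDP, which additionally models contrast sensitivity and visual masking; the function names, threshold, and `render_pass` interface are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def perceptual_difference(img_a, img_b):
    """Placeholder for the VDP: a simple RMS error between two images.
    A real VDP would weight differences by contrast sensitivity and masking."""
    return np.sqrt(np.mean((img_a - img_b) ** 2))

def render_until_converged(render_pass, threshold=0.01, max_passes=100):
    """Accumulate rendering passes and stop once the perceptual difference
    between successive running averages falls below `threshold`.
    `render_pass(i)` is assumed to return one noisy sample image."""
    acc = render_pass(0).astype(float)
    prev = acc.copy()
    for i in range(1, max_passes):
        acc += render_pass(i)
        current = acc / (i + 1)
        if perceptual_difference(current, prev) < threshold:
            return current, i + 1
        prev = current
    return acc / max_passes, max_passes
```

Comparing successive estimates rather than comparing against a reference is what makes the condition usable during rendering, when no converged ground-truth image exists yet.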
Facilitation has been shown to improve a group's effectiveness in problem solving, exploring new concepts, and finding innovative solutions. This paper addresses the facilitator's role during a creative session in a digital environment. We evaluate existing collaborative software based on the sticky-note metaphor and show that these applications are designed for use without necessarily taking into account the presence of a facilitator. They nevertheless offer a set of functions that are relevant to the group and its individual members. Based on these findings, we suggest a set of principal features to fully integrate the role of the facilitator into digital creativity sessions.
We present design principles for conceiving tangible user interfaces for the interactive, physically-based deformation of 3D models. Based on these principles, we developed a first prototype using a passive tangible user interface that embodies the 3D model. By associating an arbitrary reference material with the user interface, we convert the displacements of the user interface into the forces required by physically-based deformation models. These forces are then applied, via a physical deformation model, to the 3D model made of any material. In this way, we compensate for the absence of direct haptic feedback, which allows us to use a force-driven physically-based deformation model. A user study on simple deformations of various metal beams shows that our prototype is usable for deformation, with the user interface embodying the virtual beam. Our first results validate our design principles, and they also have high educational value for mechanical engineering lectures.
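The displacement-to-force conversion described above can be sketched under a linear-elastic (Hooke's law) assumption: the reference material turns a measured displacement into a force, and that force then drives the deformation of the target material. The stiffness values and function names below are illustrative, not taken from the prototype:

```python
def displacement_to_force(displacement, k_reference):
    """Convert a measured displacement of the tangible interface into a force
    via the stiffness of the arbitrary reference material (Hooke's law)."""
    return k_reference * displacement

def deform(force, k_target):
    """Apply the force to the target material's model: under the same linear
    assumption, the resulting displacement scales inversely with stiffness."""
    return force / k_target
```

For example, bending the interface by 2 units with a reference stiffness of 100 yields a force of 200, which deforms a softer target material (stiffness 50) by 4 units, so the same gesture produces material-dependent deformations without haptic feedback.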