In this paper we introduce a new light reflection model for image synthesis based on experimental studies of surface gloss perception. To develop the model, we conducted two experiments that explore the relationships between the physical parameters used to describe the reflectance properties of glossy surfaces and the perceptual dimensions of glossy appearance. In the first experiment we use multidimensional scaling techniques to reveal the dimensionality of gloss perception for simulated painted surfaces. In the second experiment we use magnitude estimation methods to place metrics on these dimensions that relate changes in apparent gloss to variations in surface reflectance properties. We use the results of these experiments to rewrite the parameters of a physically-based light reflection model in perceptual terms. The result is a new psychophysically-based light reflection model in which the dimensions are perceptually meaningful and variations along those dimensions are perceptually uniform. We demonstrate that the model can facilitate the description of surface gloss in graphics rendering applications. This work represents a new methodology for developing light reflection models for image synthesis.
This article reports three eye-tracking studies of online search behavior; the first two were published previously, and the third is described here for the first time. These studies reveal how users view the ranked results on a search engine results page (SERP), how the search result abstracts users view relate to those they click on, and whether gender, search task, or search engine influences these behaviors. In addition, we discuss a key challenge that arose in all three studies and that applies broadly to the use of eye tracking in studying online behaviors: the limited support for analyzing scanpaths, or sequences of eye fixations. To meet this challenge, we present a preliminary approach that uses a graphical visualization to compare a single path with a group of paths. We conclude by summarizing our findings and discussing future work in further understanding online search behavior with the help of eye tracking.
Using ab initio calculations, we have investigated the influence of stress and defects on the reconstruction of the (001) Si-terminated surface of cubic SiC. We find that the unstrained bulk is terminated by a p(2×1) reconstruction under tensile stress. This stress can be substantially relieved by the removal of dimers. Applying further tensile stress lowers the surface symmetry and leads to a c(4×2) pattern. The structural properties of this reconstruction are in very good agreement with recent measurements, suggesting that stress in SiC samples is responsible for the c(4×2) reconstruction observed experimentally. Furthermore, we have analyzed temperature and charging effects on the surface properties and made a comparative study of theoretical and experimental STM images.
Figure 1: In the above images, over 1.9 million surface samples are shaded from over 100 thousand point lights in a few seconds (2.2M triangles: 300 rows, 900 columns, 16.9 s; 388k triangles: 432 rows, 864 columns, 13.5 s; 869k triangles: 100 rows, 200 columns, 3.8 s). This is achieved by sampling a few hundred rows and columns from the large unknown matrix of surface-light interactions.

Abstract: Rendering complex scenes with indirect illumination, high dynamic range environment lighting, and many direct light sources remains a challenging problem. Prior work has shown that all these effects can be approximated by many point lights. This paper presents a scalable solution to the many-light problem suitable for a GPU implementation. We view the problem as a large matrix of sample-light interactions; the ideal final image is the sum of the matrix columns. We propose an algorithm for approximating this sum by sampling entire rows and columns of the matrix on the GPU using shadow mapping. The key observation is that the inherent structure of the transfer matrix can be revealed by sampling just a small number of rows and columns. Our prototype implementation can compute the light transfer within a few seconds for scenes with indirect and environment illumination, area lights, complex geometry, and arbitrary shaders. We believe this approach can be very useful for rapid previewing in applications like cinematic and architectural lighting design.
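The row-column sampling idea can be illustrated with a much-simplified CPU sketch: sample a few rows of the (here, explicit) sample-light matrix, cluster its columns using only that reduced information, and sum one scaled representative column per cluster. The function name and the crude nearest-center clustering are illustrative only; the paper's GPU algorithm computes rows and columns with shadow maps rather than storing the matrix.

```python
import numpy as np

def approx_column_sum(A, num_rows, num_clusters, seed=0):
    """Approximate A.sum(axis=1) (the 'final image') by:
       1. sampling a few rows of A,
       2. clustering the columns of the reduced matrix,
       3. summing one scaled representative column per cluster.
    A crude illustration of row-column sampling, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    rows = rng.choice(m, size=num_rows, replace=False)
    R = A[rows, :]                                   # reduced matrix (sampled rows)
    # Nearest-center clustering on reduced columns (illustrative only).
    centers = rng.choice(n, size=num_clusters, replace=False)
    dists = np.linalg.norm(R[:, :, None] - R[:, None, centers], axis=0)
    labels = np.argmin(dists, axis=1)                # cluster id of each column
    total = np.zeros(m)
    for c in range(num_clusters):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        norms = np.linalg.norm(R[:, members], axis=0)
        if norms.sum() == 0.0:
            continue
        rep = members[np.argmax(norms)]              # representative column
        # Scale so the cluster's total reduced energy is preserved.
        total += (norms.sum() / np.linalg.norm(R[:, rep])) * A[:, rep]
    return total
```

With as many clusters as columns the sum is exact; the savings come from using far fewer clusters than columns, so only a handful of full columns ever need to be evaluated.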
This paper presents an interactive GPU-based system for cinematic relighting with multiple-bounce indirect illumination from a fixed viewpoint. We use a deep framebuffer containing a set of view samples whose indirect illumination is recomputed from the direct illumination on a large set of gather samples distributed around the scene. This direct-to-indirect transfer is a linear transform that is particularly large given the size of the view and gather sets, which makes it hard to precompute, store, and multiply with. We address this problem by representing the transform as a set of sparse matrices encoded in wavelet space. A hierarchical construction is used to impose a wavelet basis on the unstructured gather cloud, and an image-based approach is used to map the sparse matrix computations to the GPU. We precompute the transfer matrices using a hierarchical algorithm and a variation of photon mapping in less than three hours on one processor. We achieve high-quality indirect illumination at 10-20 frames per second for complex scenes with over 2 million polygons, with diffuse and glossy materials and arbitrary direct lighting models (expressed using shaders). We compute per-pixel indirect illumination without the need for irradiance caching or other subsampling techniques.
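At its core, applying the direct-to-indirect transfer each frame is a sparse matrix-vector product (indirect = T · direct). A minimal CPU-side sketch of that multiplication, using the standard CSR sparse layout, is shown below; the paper additionally compresses T in a wavelet basis and maps the computation to the GPU, and all names here are illustrative.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x, n_rows):
    """y = T @ x for a CSR-encoded sparse matrix T: `data` holds the
    nonzero values, `indices` their column ids, and the slice
    indptr[i]:indptr[i+1] delimits row i's entries."""
    y = np.zeros(n_rows)
    for i in range(n_rows):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

# Toy transfer matrix T = [[0, 2, 0],
#                          [1, 0, 3]] in CSR form:
data    = np.array([2.0, 1.0, 3.0])
indices = np.array([1, 0, 2])
indptr  = np.array([0, 1, 3])
direct  = np.array([1.0, 2.0, 3.0])   # direct light at the gather samples
indirect = csr_matvec(data, indices, indptr, direct, n_rows=2)
```

Sparsity is what makes the transform tractable: storage and multiply cost scale with the number of nonzeros kept after wavelet compression, not with the full view-by-gather size.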
This paper presents an image editing framework in which users provide reference images to indicate desired color edits. In our approach, users specify pairs of strokes to mark corresponding regions in the original and reference images that should share the same color "style". Within each stroke pair, a nonlinear constrained parametric transfer model is used to transfer the reference colors to the original. We estimate the model parameters by matching color distributions, under constraints that ensure no visual artifacts appear in the transfer result. To perform transfer on the whole image, we employ optimization methods to propagate the model parameters defined at each stroke location to spatially close regions of similar appearance. This stroke-based formulation requires minimal user effort while retaining the high degree of user control necessary to allow artistic interpretation. We demonstrate our approach by performing color transfer on a number of image pairs varying in content and style, and show that our algorithm outperforms state-of-the-art color transfer methods in both user controllability and the visual quality of the transfer results.
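For context, the simplest parametric color transfer just matches per-channel means and standard deviations globally (in the spirit of Reinhard-style transfer). The sketch below shows that baseline; the paper's per-stroke model is instead nonlinear, constrained to avoid artifacts, and spatially propagated, none of which this toy captures. Function and variable names are illustrative.

```python
import numpy as np

def match_color_stats(src, ref):
    """Per-channel linear transfer matching src's color mean/std to ref's.
    A far simpler stand-in for the paper's nonlinear constrained model."""
    src = np.asarray(src, dtype=np.float64)
    ref = np.asarray(ref, dtype=np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        scale = r.std() / s.std() if s.std() > 0 else 1.0
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return np.clip(out, 0.0, 1.0)   # keep results in the valid [0, 1] range
```

Applied globally, such a transfer offers no local control and can easily introduce artifacts, which is precisely what motivates the stroke-pair constraints and spatial propagation in the paper.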
We present a thorough study evaluating different light field editing interfaces, tools, and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which can make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction with current techniques. We perform two experiments, collecting both objective and subjective data from a varied set of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields and focus on simple edits. This allows us to gain basic insight into light field editing and to design a more advanced editing interface, which is then used in the second experiment, employing real light fields with imperfectly reconstructed depth and covering more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Lastly, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.