This paper presents a rapid hierarchical radiosity algorithm for illuminating scenes containing large polygonal patches. The algorithm constructs a hierarchical representation of the form factor matrix by adaptively subdividing patches into subpatches according to a user-supplied error bound. The algorithm guarantees that all form factors are computed to the same precision, removing many common image artifacts caused by inaccurate form factors. More importantly, the algorithm decomposes the form factor matrix into at most O(n) blocks (where n is the number of elements). Previous radiosity algorithms represented the element-to-element transport interactions with n² form factors. Visibility algorithms are given that work well with this approach. Standard techniques for shooting and gathering can be used with the hierarchical representation to solve for equilibrium radiosities, but we also discuss using a brightness-weighted error criterion, in conjunction with multigridding, to progressively refine the image even more rapidly.
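The adaptive subdivision described above can be sketched as a recursive refinement procedure: two patches are linked at the coarsest level at which an estimated transport bound falls below the user-supplied error tolerance, so each link becomes one block of the form factor matrix. The sketch below is a minimal 1D stand-in; the Patch class, the crude estimate_ff bound, and the gap parameter are illustrative assumptions, not the paper's actual estimator:

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    lo: float
    hi: float
    children: list = field(default_factory=list)

    @property
    def area(self):
        return self.hi - self.lo

    @property
    def center(self):
        return 0.5 * (self.lo + self.hi)

def subdivide(p):
    if not p.children:
        m = p.center
        p.children = [Patch(p.lo, m), Patch(m, p.hi)]
    return p.children

def estimate_ff(p, q, gap=1.0):
    # Crude upper bound on transport between two 1D "patches" facing each
    # other across a gap -- a stand-in for a real form factor estimate.
    d = gap + abs(p.center - q.center)
    return p.area * q.area / (d * d)

def refine(p, q, eps, min_area, links):
    f = estimate_ff(p, q)
    if f < eps or (p.area <= min_area and q.area <= min_area):
        links.append((p, q, f))            # record one block of the matrix
    elif p.area >= q.area:
        for c in subdivide(p):             # subdivide the larger patch
            refine(c, q, eps, min_area, links)
    else:
        for c in subdivide(q):
            refine(p, c, eps, min_area, links)

links = []
refine(Patch(0.0, 1.0), Patch(0.0, 1.0), eps=0.05, min_area=0.01, links=links)
```

Every recorded link satisfies the same error bound, which is why the number of blocks grows roughly linearly in the number of leaf elements rather than quadratically.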
We present a method, based on pre-computed light transport, for real-time rendering of objects under all-frequency, time-varying illumination represented as a high-resolution environment map. Current techniques are limited to small area lights, which yield sharp shadows, or large low-frequency lights, which yield very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a non-linear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering is performed by multiplying a sparse light vector by a sparse transport matrix, which is very fast. For accurate rendering, using non-linear wavelets is an order of magnitude faster than using linear spherical harmonics, the current best technique.
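The pipeline sketched above, transforming the illumination into a wavelet basis, keeping only the largest coefficients, then applying a sparse transport matrix, can be illustrated with a toy 1D Haar transform. The sizes, the random sparse transport matrix, and the 1D signal standing in for an environment map are assumptions for illustration only:

```python
import numpy as np

def haar_1d(x):
    """Orthonormal Haar wavelet transform of a length-2^k signal."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # averages
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # details
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

def nonlinear_approx(coeffs, k):
    """Non-linear approximation: keep only the k largest-magnitude terms."""
    idx = np.argsort(-np.abs(coeffs))[:k]
    return idx, coeffs[idx]

rng = np.random.default_rng(0)
light = haar_1d(rng.standard_normal(256))          # light in wavelet space
# sparse transport matrix, encoded in the same basis (~5% nonzeros)
T = rng.standard_normal((64, 256)) * (rng.random((64, 256)) < 0.05)

idx, vals = nonlinear_approx(light, 16)
radiance = T[:, idx] @ vals   # sparse transport times sparse light vector
```

Because only the kept wavelet columns of the transport matrix participate, the per-frame cost scales with the number of retained terms rather than with the environment-map resolution.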
We present design principles for creating effective assembly instructions and a system that is based on these principles. The principles are drawn from cognitive psychology research that investigated people's conceptual models of assembly and effective methods to visually communicate assembly information. Our system is inspired by earlier work in robotics on assembly planning and in visualization on automated presentation design. Although other systems have considered presentation and planning independently, we believe it is necessary to address the two problems simultaneously in order to create effective assembly instructions. We describe the algorithmic techniques used to produce assembly instructions given object geometry, orientation, and optional grouping and ordering constraints on the object's parts. Our results demonstrate that it is possible to produce aesthetically pleasing and easy-to-follow instructions for many everyday objects.
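The ordering constraints mentioned above amount to a dependency graph over parts, and any valid assembly sequence is a topological order of that graph. A minimal sketch, in which the part names and constraints are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical chair parts: each part maps to the parts that must be
# placed before it (the system also accepts grouping constraints).
constraints = {
    "leg": {"base"},
    "seat": {"leg"},
    "back": {"seat"},
}
# static_order() yields one sequence consistent with every constraint
order = list(TopologicalSorter(constraints).static_order())
```

Real planners must additionally check geometric feasibility of each step; this sketch only covers the ordering-constraint side.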
Figure 1: Scene modeling using a context search. Left: A user modeling a scene places the blue box in the scene and asks for models that belong at this location. Middle: Our algorithm selects models from the database that match the provided neighborhood. Right: The user selects a model from the list and it is inserted into the scene. All models pictured in this paper are used with permission from Google 3D Warehouse.

Abstract: Large corpora of 3D models, such as Google 3D Warehouse, are now becoming available on the web. It is possible to search these databases using a keyword search. This makes it possible for designers to easily include existing content into new scenes. In this paper, we describe a method for context-based search of 3D scenes. We first downloaded a large set of scene graphs from Google 3D Warehouse. These scene graphs were segmented into individual objects. We also extracted tags from the names of the models. Given the object shape, tags, and spatial relationship between pairs of objects, we can predict the strength of a relationship between a candidate model and an existing object in the scene. Using this function, we can perform context-based queries. The user specifies a region in the scene they are modeling using a 3D bounding box, and the system returns a list of related objects.  We show that context-based queries perform better than keyword queries alone, and that without any keywords our algorithm still returns a relevant set of models.
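A context-based query of this kind can be sketched as scoring each candidate model against the objects already placed in the scene. The sketch below combines tag similarity with spatial proximity to the query box; it is an illustrative stand-in for the learned pairwise relationship function, and the tags, positions, and Gaussian weighting are all assumptions:

```python
import math

def jaccard(a, b):
    """Tag-set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def context_score(cand_tags, query_center, scene, sigma=1.0):
    """Score a candidate model against every object already in the scene:
    tag similarity weighted by proximity to the query bounding box."""
    score = 0.0
    for obj_tags, obj_center in scene:
        d2 = sum((q - o) ** 2 for q, o in zip(query_center, obj_center))
        w = math.exp(-d2 / (2.0 * sigma ** 2))   # nearby objects weigh more
        score += w * jaccard(cand_tags, obj_tags)
    return score

# toy scene: a desk near the query region, a car far away
scene = [({"desk", "office"}, (0.0, 0.0, 0.0)),
         ({"car", "vehicle"}, (10.0, 0.0, 0.0))]
query = (0.5, 0.0, 0.0)   # center of the user's 3D bounding box
```

A candidate whose tags co-occur with nearby scene objects then outranks an unrelated one, which is the behavior the context query relies on.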
This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.
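The pigment model described above can be sketched as a brush stamp that composites into several per-channel texture maps at once, with thickness accumulating into a bump map. This is a minimal sketch; the channel names, soft circular falloff, and single-channel maps are illustrative assumptions:

```python
import numpy as np

def stamp(maps, u, v, radius, pigment):
    """Alpha-composite one soft circular brush stamp into each shading
    channel; bump-map thickness accumulates instead of replacing."""
    h, w = maps["diffuse"].shape
    ys, xs = np.ogrid[:h, :w]
    alpha = np.clip(1.0 - np.sqrt((xs - u) ** 2 + (ys - v) ** 2) / radius,
                    0.0, 1.0)
    for name in ("diffuse", "specular", "roughness"):
        maps[name] = (1.0 - alpha) * maps[name] + alpha * pigment[name]
    maps["bump"] += alpha * pigment["thickness"]   # pigment builds up height

maps = {k: np.zeros((64, 64))
        for k in ("diffuse", "specular", "roughness", "bump")}
stamp(maps, 32, 32, 8, {"diffuse": 0.8, "specular": 0.2,
                        "roughness": 0.5, "thickness": 1.0})
```

The resulting maps are exactly the paint program's output format: ordinary texture maps that any texture-mapping renderer can consume.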