Figure 1: Our texture generation process takes an example texture patch (left) and random noise (middle) as input, and modifies this noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size and is perceived as very similar to the given example. Using our algorithm, textures can be generated within seconds, and the synthesized results are always tileable.

Abstract

Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high-quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.
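The per-pixel neighborhood search at the heart of this kind of synthesis can be sketched as follows. This is a deliberately simplified, single-resolution version with a full square neighborhood and brute-force L2 search (the paper uses causal neighborhoods, a multi-resolution pyramid, and tree-structured vector quantization to accelerate the search); all function names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def synthesize(example, out_size, patch=5, seed=0):
    """Simplified neighborhood-matching texture synthesis.

    Initializes the output with noise drawn from the example's pixel
    histogram, then replaces each interior pixel with the center pixel of
    the best-matching example neighborhood (brute-force L2 search)."""
    rng = np.random.default_rng(seed)
    h, w = example.shape
    half = patch // 2
    # "Random noise" input: pixels drawn from the example's histogram.
    out = rng.choice(example.ravel(), size=(out_size, out_size))
    # Precompute every full example neighborhood as a flat vector.
    coords = [(y, x) for y in range(half, h - half)
                     for x in range(half, w - half)]
    neigh = np.array([example[y - half:y + half + 1,
                              x - half:x + half + 1].ravel()
                      for y, x in coords])
    for y in range(half, out_size - half):
        for x in range(half, out_size - half):
            q = out[y - half:y + half + 1, x - half:x + half + 1].ravel()
            best = int(np.argmin(((neigh - q) ** 2).sum(axis=1)))
            out[y, x] = example[coords[best]]
    return out
```

Replacing the brute-force `argmin` with a tree-structured vector quantization lookup is what yields the paper's two-orders-of-magnitude speedup.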
Algorithms exist for synthesizing a wide variety of textures over rectangular domains. However, it remains difficult to synthesize general textures over arbitrary manifold surfaces. In this paper, we present a solution to this problem for surfaces defined by dense polygon meshes. Our solution extends Wei and Levoy's texture synthesis method [25] by generalizing their definition of search neighborhoods. For each mesh vertex, we establish a local parameterization surrounding the vertex, use this parameterization to create a small rectangular neighborhood with the vertex at its center, and search a sample texture for similar neighborhoods. Our algorithm requires as input only a sample texture and a target model. Notably, it does not require specification of a global tangent vector field; it computes one as it goes, either randomly or via a relaxation process. Despite this, the synthesized texture contains no discontinuities, exhibits low distortion, and is perceived to be similar to the sample texture. We demonstrate that our solution is robust and is applicable to a wide range of textures.
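The local parameterization step can be illustrated with a minimal sketch: build an orthonormal tangent frame at a vertex, then project neighboring vertex positions onto the tangent plane to obtain 2D coordinates for a rectangular resampling grid. This uses plain orthographic projection rather than the paper's actual mesh flattening, and all names are assumptions.

```python
import numpy as np

def local_frame(normal, tangent_hint):
    """Orthonormal tangent frame (t, b) at a vertex.
    Assumes tangent_hint is not parallel to normal."""
    n = normal / np.linalg.norm(normal)
    t = tangent_hint - np.dot(tangent_hint, n) * n  # Gram-Schmidt step
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return t, b

def flatten_neighborhood(center, neighbors, normal, tangent_hint):
    """Project neighboring vertex positions onto the tangent plane,
    yielding 2D (u, v) coordinates suitable for resampling a small
    rectangular neighborhood centered at the vertex."""
    t, b = local_frame(normal, tangent_hint)
    d = neighbors - center
    return np.stack([d @ t, d @ b], axis=1)
```

The tangent hint plays the role of the per-vertex orientation that the paper assigns either randomly or via relaxation; rotating the hint rotates the sampled neighborhood.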
In this paper we present a general framework for performing constrained mesh deformation tasks with gradient domain techniques. We present a gradient domain technique that works well with a wide variety of linear and nonlinear constraints. The constraints we introduce include the nonlinear volume constraint for volume preservation, the nonlinear skeleton constraint for maintaining the rigidity of limb segments of articulated figures, and the projection constraint for easy manipulation of the mesh without having to frequently switch between multiple viewpoints. To handle nonlinear constraints, we cast mesh deformation as a nonlinear energy minimization problem and solve the problem using an iterative algorithm. The main challenges in solving this nonlinear problem are the slow convergence and numerical instability of the iterative solver. To address these issues, we develop a subspace technique that builds a coarse control mesh around the original mesh and projects the deformation energy and constraints onto the control mesh vertices using mean value interpolation. The energy minimization is then carried out in the subspace formed by the control mesh vertices. Running in this subspace, our energy minimization solver is both fast and stable, and it provides interactive responses. We demonstrate our deformation constraints and subspace deformation technique with a variety of constrained deformation examples.
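The idea of minimizing a deformation energy under a nonlinear volume constraint can be sketched on a toy 2D polygon, where area stands in for volume. This sketch uses a simple penalty method with gradient descent rather than the paper's solver and subspace projection; the weights, step size, and function names are illustrative assumptions.

```python
import numpy as np

def polygon_area(v):
    """Signed area of a closed 2D polygon (shoelace formula)."""
    x, y = v[:, 0], v[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def deform(verts, handle_idx, handle_pos,
           n_iter=800, w_area=50.0, lr=0.01):
    """Toy constrained deformation: minimize an umbrella (Laplacian)
    smoothness energy plus a nonlinear area-preservation penalty,
    with one vertex pinned as a positional handle."""
    v = verts.astype(float).copy()
    target = polygon_area(v)
    for _ in range(n_iter):
        # Smoothness: pull each vertex toward its neighbors' average.
        grad = v - 0.5 * (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0))
        # Area penalty gradient: d/dv of 0.5 * w * (A - A0)^2, using
        # dA/dx_i = 0.5 (y_{i+1} - y_{i-1}), dA/dy_i = 0.5 (x_{i-1} - x_{i+1}).
        dA = 0.5 * np.stack([np.roll(v[:, 1], -1) - np.roll(v[:, 1], 1),
                             np.roll(v[:, 0], 1) - np.roll(v[:, 0], -1)],
                            axis=1)
        grad += w_area * (polygon_area(v) - target) * dA
        v -= lr * grad
        v[handle_idx] = handle_pos  # hard positional constraint
    return v
```

Without the penalty term, the smoothness energy alone would shrink the polygon; the nonlinear constraint is what keeps the enclosed area near its rest value while the handle is dragged.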
(a) plum stack (b) pebble sculpture (c) dish of corn kernels, carrots, beans (d) spaghetti Figure 1: Discrete element textures. Given a small input exemplar (left within each image), our method synthesizes a corresponding output with a user-specified coarse-scale domain (right within each image). Using a data-driven approach, we can achieve a variety of effects, including (a) regular distribution, (b) output with a different domain shape and boundary conditions from the input, (c) a mixture of different elements, and (d) deformable and elongated shapes.

Abstract

A variety of phenomena can be characterized by repetitive small-scale elements within a large-scale domain. Examples include a stack of fresh produce, a plate of spaghetti, or a mosaic pattern. Although certain results can be produced via manual placement or procedural/physical simulation, these methods can be labor intensive, difficult to control, or limited to specific phenomena. We present discrete element textures, a data-driven method for synthesizing repetitive elements according to a small input exemplar and a large output domain. Our method preserves both individual element properties and their aggregate distributions. It is also general and applicable to a variety of phenomena, including different dimensionalities, different element properties and distributions, and different effects, both artistic and physically realistic. We represent each element by one or multiple samples whose positions encode relevant element attributes, including position, size, shape, and orientation. We propose a sample-based neighborhood similarity metric and an energy optimization solver to synthesize desired outputs that observe not only input exemplars and output domains but also optional constraints such as physics, orientation fields, and boundary conditions. As a further benefit, our method can also be applied to editing existing element distributions.
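A minimal sketch of a sample-based neighborhood similarity metric, assuming each element is reduced to a single sample point: collect offset vectors to nearby samples, then greedily match offsets between an output neighborhood and an exemplar neighborhood. The paper's actual metric, multi-sample element representation, and solver are more elaborate; the function names and the count-mismatch penalty are assumptions.

```python
import numpy as np

def neighborhood(samples, i, radius):
    """Offset vectors from sample i to all other samples within radius."""
    d = samples - samples[i]
    dist = np.linalg.norm(d, axis=1)
    return d[(dist > 0) & (dist < radius)]

def neighborhood_distance(n_out, n_in):
    """Greedy one-to-one matching of offset vectors between an output
    neighborhood and an exemplar neighborhood, plus a penalty for
    mismatched sample counts (the unit penalty weight is an assumption)."""
    if len(n_in) == 0:
        return float(len(n_out))
    cost, used = 0.0, set()
    for o in n_out:
        dists = [np.inf if j in used else float(np.linalg.norm(o - p))
                 for j, p in enumerate(n_in)]
        j = int(np.argmin(dists))
        if np.isfinite(dists[j]):
            used.add(j)
            cost += dists[j]
    return cost + abs(len(n_out) - len(n_in))
```

An optimization solver would repeatedly perturb output sample positions to reduce the summed neighborhood distance against the best-matching exemplar neighborhoods.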
Redirected walking techniques can enhance the immersion and visual-vestibular comfort of virtual reality (VR) navigation, but they are often limited by the size, shape, and content of the physical environment. We propose a redirected walking technique that applies to small physical environments with static or dynamic obstacles. Via a head- and eye-tracking VR headset, our method detects saccadic suppression and redirects the user during the resulting temporary blindness. Our dynamic path planning runs in real time on a GPU, and thus can avoid static and dynamic obstacles, including walls, furniture, and other VR users sharing the same physical space. To further enhance saccadic redirection, we propose subtle gaze direction methods tailored for VR perception. We demonstrate that saccades can significantly increase the rotation gains during redirection without introducing visual distortions or simulator sickness. This allows our method to apply to large open virtual spaces and small physical environments for room-scale VR. We evaluate our system via numerical simulations and real user studies.
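Saccade detection from eye-tracking data is commonly done by thresholding angular gaze velocity; the following is a minimal sketch under that assumption. The 180 deg/s default and the function names are illustrative, not the paper's calibrated values.

```python
import math

def detect_saccades(gaze_dirs, timestamps, threshold_deg_per_s=180.0):
    """Flag frames whose angular gaze velocity exceeds a threshold.

    gaze_dirs: sequence of unit 3-vectors, one per frame.
    timestamps: frame times in seconds, same length as gaze_dirs."""
    flags = [False]  # first frame has no predecessor to compare against
    for i in range(1, len(gaze_dirs)):
        a, b = gaze_dirs[i - 1], gaze_dirs[i]
        # Angle between consecutive gaze directions, clamped for acos safety.
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        angle = math.degrees(math.acos(dot))
        dt = timestamps[i] - timestamps[i - 1]
        flags.append(dt > 0 and angle / dt > threshold_deg_per_s)
    return flags
```

During flagged frames, the renderer can inject a small extra camera rotation (the redirection gain), which saccadic suppression keeps imperceptible.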
Figure 1: Poisson disk samples produced by our algorithm. The color images on the left are screenshots of our algorithm running on a GPU, producing 2D samples in a parallel, multi-resolution process. (Pixel colors represent sample locations, with black indicating the absence of any sample.) The final set of 2D samples is shown in the middle. Our algorithm also works in arbitrary dimensions, as demonstrated in the 3D case shown on the right (with samples visualized as small spheres). Our GPU implementation generates more than 4 million 2D samples/second and 555 thousand 3D samples/second. (2D case): r = 0.02, # samples = 1657. (3D case): r = 0.07, # samples = 1797. k = 4 for all images shown in this paper unless stated otherwise.

Abstract

Sampling is important for a variety of graphics applications, including rendering, imaging, and geometry processing. However, producing sample sets with desired efficiency and blue noise statistics has been a major challenge, as existing methods are either sequential with limited speed, or parallel but only through pre-computed datasets, and thus fall short of producing samples with blue noise statistics. We present a Poisson disk sampling algorithm that runs in parallel and produces all samples on the fly with the desired blue noise properties. Our main idea is to subdivide the sample domain into grid cells and draw samples concurrently from multiple cells that are sufficiently far apart that their samples cannot conflict with one another. We present a parallel implementation of our algorithm running on a GPU with constant cost per sample and a constant number of computation passes for a target number of samples. Our algorithm also works in arbitrary dimensions and allows adaptive sampling from a user-specified importance field. Furthermore, our algorithm is simple and easy to implement, and it runs faster than existing techniques.
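The grid-cell idea can be sketched as follows: with cell side r/√2, each cell holds at most one sample, and two cells whose indices differ by at least 3 in x or y are separated by at least √2·r, so they can never conflict. The resulting 3×3 = 9 phase groups could each be sampled concurrently on a GPU; this sketch processes them serially, and the names and parameters are assumptions rather than the paper's implementation.

```python
import math
import numpy as np

def poisson_disk(r, k=4, seed=0):
    """Grid-phase Poisson disk sampling sketch on the unit square.

    Each cell receives up to k trial darts; a dart is accepted only if
    it is at least r away from every sample already in nearby cells."""
    rng = np.random.default_rng(seed)
    cell = r / math.sqrt(2)
    n = int(1.0 / cell)
    grid = {}  # (gx, gy) -> accepted sample

    def conflict_free(p, gx, gy):
        # r < 2 cell widths, so checking a +-2 cell window suffices.
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                q = grid.get((gx + dx, gy + dy))
                if q is not None and math.hypot(p[0] - q[0], p[1] - q[1]) < r:
                    return False
        return True

    for px in range(3):          # 9 phase groups; cells within a group
        for py in range(3):      # are >= 3 apart and cannot conflict
            for gx in range(px, n, 3):
                for gy in range(py, n, 3):
                    for _ in range(k):
                        p = ((gx + rng.random()) * cell,
                             (gy + rng.random()) * cell)
                        if conflict_free(p, gx, gy):
                            grid[(gx, gy)] = p
                            break
    return list(grid.values())
```

On a GPU, each phase group would become one computation pass, with one thread per cell, which is what gives the constant number of passes for a target sample count.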