Trees, bushes, and other plants are ubiquitous in urban environments, and realistic models of trees can add a great deal of realism to a digital urban scene. There has been much research on modeling tree structures, but limited work on reconstructing the geometry of real-world trees; even then, most work has focused on reconstruction from photographs aided by significant user interaction. In this paper, we perform active laser scanning of real-world vegetation and present an automatic approach that robustly reconstructs skeletal structures of trees, from which full geometry can be generated. The core of our method is a series of global optimizations that fit skeletal structures to the often sparse, incomplete, and noisy point data. A significant benefit of our approach is its ability to reconstruct multiple overlapping trees simultaneously without segmentation. We demonstrate the effectiveness and robustness of our approach on many raw scans of different tree varieties.
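To give a concrete flavor of skeletal extraction from point data, here is a minimal sketch of one common baseline (not the global optimizations of the paper itself): connect nearby points into a graph, compute graph distances from a root point, and group points into distance bins whose centroids approximate skeleton nodes. All names and parameters here are illustrative assumptions.

```python
import heapq
import math

def skeleton_bins(points, root, radius, bin_width):
    """Toy skeleton extraction: connect nearby points into a graph,
    compute graph distances from the root, then group points into
    distance bins whose centroids approximate skeleton nodes."""
    n = len(points)
    # neighborhood graph: edge between points closer than `radius`
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d <= radius:
                adj[i].append((j, d))
                adj[j].append((i, d))
    # Dijkstra from the root point
    g = [math.inf] * n
    g[root] = 0.0
    pq = [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > g[u]:
            continue
        for v, w in adj[u]:
            if d + w < g[v]:
                g[v] = d + w
                heapq.heappush(pq, (g[v], v))
    # bin points by graph distance; each bin centroid is a skeleton node
    bins = {}
    for i in range(n):
        if g[i] < math.inf:
            bins.setdefault(int(g[i] // bin_width), []).append(points[i])
    centroids = []
    for k in sorted(bins):
        pts = bins[k]
        centroids.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
    return centroids
```

On a synthetic "trunk" of ten points spaced 0.5 apart along the z-axis, `skeleton_bins(pts, 0, 0.6, 1.0)` collapses them into five skeleton nodes. Real scans are far noisier, which is why robust global fitting, as in the paper, is needed in practice.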
In this paper we present a novel approach for interactive rendering of large terrain datasets, based on subdividing the terrain into rectangular patches at different resolutions. Each patch is represented by four triangular tiles, which can be at different resolutions, and four strips that stitch the four tiles together seamlessly. As a result, our scheme confines resolution changes to the interior of patches rather than across patch boundaries. At runtime, the terrain patches are used to construct a level of detail based on view parameters. The selected level of detail includes only the layout of the patches and the resolutions at boundary edges. Since adjacent patches agree on the resolution of common edges, the resulting mesh contains no cracks or degenerate triangles. The GPU generates the patch meshes by using scaled instances of cached tiles and assigning an elevation to each vertex from cached textures. Our algorithm achieves quality images at high frame rates while providing seamless transitions between different levels of detail.
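The key invariant in this scheme, that neighboring patches agree on the resolution of every shared edge, can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: given a grid of per-patch resolution levels, each shared edge is assigned one resolution both neighbors accept (here, the coarser of the two), which is what rules out cracks.

```python
def edge_resolutions(patch_levels):
    """Assign each edge shared by two patches a single resolution both
    neighbours agree on (here, the coarser of the two), so the stitched
    mesh has no cracks. `patch_levels` is a 2-D grid of per-patch
    resolution levels."""
    rows, cols = len(patch_levels), len(patch_levels[0])
    edges = {}
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # edge between (r, c) and (r, c + 1)
                edges[((r, c), (r, c + 1))] = min(patch_levels[r][c],
                                                  patch_levels[r][c + 1])
            if r + 1 < rows:  # edge between (r, c) and (r + 1, c)
                edges[((r, c), (r + 1, c))] = min(patch_levels[r][c],
                                                  patch_levels[r + 1][c])
    return edges
```

Because each edge resolution is derived symmetrically from the two adjacent patches, both sides tessellate the shared boundary identically; the strips then only need to bridge between a tile's interior resolution and its (possibly coarser) edge resolution.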
Figure 1: Reconstruction of a scanned tree using our lobe-based tree representation: a) photograph; b) point set; c) lobe-based representation with 24 lobes (22 kB in total); d) synthesized tree (25 MB in total).
Abstract: We present a lobe-based representation for modeling trees. The new representation is based on the observation that the tree's foliage details can be abstracted into canonical geometry structures, termed lobe-textures. We introduce techniques to (i) approximate the geometry of given tree data and encode it into a lobe-based representation, and (ii) decode the representation and synthesize a fully detailed tree model that visually resembles the input. The encoded tree serves as a lightweight intermediate representation, which facilitates efficient storage and transmission of massive amounts of trees, e.g., from a server to clients for interactive applications in urban environments. The method is evaluated both by reconstructing laser-scanned trees (given as point sets) and by re-representing existing tree models (given as polygons).
In this paper we present persistent grid mapping (PGM), a novel framework for interactive view-dependent terrain rendering. Our algorithm is geared toward high utilization of modern GPUs, and takes advantage of ray casting and mesh rendering. The algorithm maintains multiple levels of the elevation and color maps to achieve faithful sampling of the viewed region. The rendered mesh ensures the absence of cracks and degenerate triangles that may cause visual artifacts. In addition, support for external texture memory is provided to enable the rendering of terrains that exceed the size of texture memory. Our experimental results show that the PGM algorithm provides high-quality images at steady frame rates.
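The core idea behind persistent grid mapping can be illustrated with a small sketch (an assumption-laden toy, not the paper's GPU implementation): a fixed screen-space grid is "persistent" in that its topology never changes; each vertex casts a ray from the camera, the ray is intersected with the terrain's base plane, and the hit point is lifted by the sampled elevation. The camera model and parameter names below are invented for illustration.

```python
import math

def persistent_grid(cam, look_pitch, fov, rows, cols, height):
    """Toy persistent grid mapping: cast one ray per vertex of a fixed
    screen-space grid, intersect it with the base plane z = 0, and lift
    the hit point by the sampled elevation `height(x, y)`. Because the
    grid topology is fixed, the rendered mesh is always crack-free."""
    cx, cy, cz = cam
    mesh = []
    for i in range(rows):
        for j in range(cols):
            # vertex position in screen space, in [-1, 1] x [-1, 1]
            u = -1 + 2 * j / (cols - 1)
            v = -1 + 2 * i / (rows - 1)
            # ray direction: base pitch/yaw plus per-vertex offsets
            pitch = look_pitch + v * fov / 2
            yaw = u * fov / 2
            dx = math.cos(pitch) * math.sin(yaw)
            dy = math.cos(pitch) * math.cos(yaw)
            dz = math.sin(pitch)
            if dz >= 0:          # ray misses the ground plane
                mesh.append(None)
                continue
            t = -cz / dz         # intersection with plane z = 0
            x, y = cx + t * dx, cy + t * dy
            mesh.append((x, y, height(x, y)))
    return mesh
```

Because rays diverge with distance, the projected grid naturally samples the terrain densely near the viewer and coarsely far away, which is the view-dependent level of detail the abstract refers to; the multi-level elevation and color maps then keep that sampling faithful.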