With the widespread use of 3D acquisition devices, there is an increasing need to consolidate captured noisy and sparse point cloud data for accurate representation of the underlying structures. Numerous algorithms tackle this ill-posed problem by relying on a variety of assumptions such as local smoothness. However, such priors lead to loss of important features and geometric detail. Instead, we propose a novel data-driven approach to point cloud consolidation via a convolutional neural network based technique. Our method takes a sparse and noisy point cloud as input and produces a dense point cloud that accurately represents the underlying surface by resolving ambiguities in geometry. The resulting point set can then be used to reconstruct accurate manifold surfaces and estimate surface properties. To achieve this, we propose a generative neural network architecture that can input and output point clouds, unlocking a powerful set of tools from the deep learning literature. We use this architecture to apply convolutional neural networks to local patches of geometry for high-quality and efficient point cloud consolidation. This results in significantly more accurate surfaces, as we illustrate with a diversity of examples and comparisons to the state of the art.
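As a rough illustration of the patch-based view described above, the sketch below extracts local k-nearest-neighbor patches from a point cloud with NumPy. The function name `extract_patches`, the patch size `k`, and the random test cloud are hypothetical choices for illustration, not the paper's pipeline, which feeds such local patches to a convolutional network.

```python
import numpy as np

def extract_patches(points, k=16):
    """For each point, gather its k nearest neighbors as a local patch,
    translated so the center point sits at the origin."""
    # Pairwise squared distances (N x N); fine for small clouds.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest (includes self)
    patches = points[idx] - points[:, None, :]   # center each patch
    return patches                               # shape (N, k, 3)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
patches = extract_patches(cloud, k=16)
```

For large clouds a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix.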
Figure 1: Our example-based structure synthesis method can be used to generate structures with discrete elements (left), continuous geometries (middle), and their mixtures (right), on different domains such as surfaces, bounding volumes, or curves.

Abstract
We present an example-based geometry synthesis approach for generating general repetitive structures. Our model is based on a meshless representation, unifying and extending previous synthesis methods. Structures in the example and the output are converted into a functional representation, where the functions are defined by point locations and attributes. We then formulate synthesis as a minimization problem in which patches of the output function are matched to those of the example. Compared to existing repetitive structure synthesis methods, the new algorithm offers several advantages. It handles general discrete and continuous structures, and their mixtures, in the same framework. The smooth formulation permits robust optimization procedures in the algorithm. Equipped with an accurate patch similarity measure and dedicated sampling control, the algorithm preserves local structures accurately, regardless of the initial distribution of output points. It can also progressively synthesize output structures in given subspaces, allowing users to interactively control and guide the synthesis in real time. We present various results for continuous/discrete structures and their mixtures, residing on curves, submanifolds, volumes, and general subspaces, some of which are generated interactively.
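The patch-matching minimization can be illustrated schematically: the energy below sums, over output patches, the squared distance to the best-matching example patch, which an optimizer would iteratively decrease by moving output points. The name `synthesis_energy` and the brute-force nearest-patch search are illustrative assumptions; the actual method operates on the functional representation and uses a dedicated patch similarity measure.

```python
import numpy as np

def synthesis_energy(output_patches, example_patches):
    """Sum of squared distances from each output patch (flattened point
    coordinates) to its best-matching example patch."""
    out = output_patches.reshape(len(output_patches), -1)
    ex = example_patches.reshape(len(example_patches), -1)
    d2 = np.sum((out[:, None, :] - ex[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).sum()

rng = np.random.default_rng(0)
example = rng.random((20, 8, 2))   # 20 example patches of 8 points in 2D
output = example[:5].copy()        # output patches copied from the example
energy = synthesis_energy(output, example)  # zero: perfect matches exist
```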
We propose a novel neural network architecture for point cloud classification. Our key idea is to automatically transform the unordered 3D input data into a set of useful 2D depth images and classify them with well-performing image classification CNNs. We present new differentiable module designs that generate depth images from a point cloud. These modules can be combined with any network architecture for processing point clouds. We use them in combination with state-of-the-art classification networks and obtain results competitive with the state of the art in point cloud classification. Furthermore, our architecture automatically produces informative images representing the input point cloud, which could be used for further applications such as point cloud visualization.
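A fixed orthographic projection conveys the basic idea of rendering a point cloud into a depth image; the paper's differentiable, learnable modules are more elaborate. Everything here (`depth_image`, the resolution parameter, the z-buffer-style overwrite) is an assumed minimal sketch.

```python
import numpy as np

def depth_image(points, res=64):
    """Project a point cloud orthographically along z into a res x res
    depth image; each pixel keeps the largest z value (nearest point)."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    pix = np.rint((xy - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    img = np.zeros((res, res))
    order = np.argsort(points[:, 2])   # farthest first, nearest written last
    img[pix[order, 1], pix[order, 0]] = points[order, 2]
    return img

pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 1.0, 2.0],
                [0.0, 0.0, 3.0]])
img = depth_image(pts, res=2)   # pixel (0,0) keeps z=3, the nearest point
```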
Figure 1: Understanding natural or synthetic complex distributions, such as the distribution of trees as a function of altitude (left), is a difficult problem if the arrangement of the entities, resulting from pair-wise interactions, is spatially adaptive, as shown for a canonical example in the middle. Our analysis technique provides an informative and comprehensive summary of such distributions. The correlations in our framework are represented with a set of extracted basis pair correlation functions (PCF 1 and PCF 2, from local patches 1 and 2 in the middle example) and corresponding weight maps illustrating how they are interpolated in space. Our synthesis algorithm utilizes these measures to synthesize distributions with adaptive density and correlations on Euclidean domains (right) or surfaces (left).

Abstract
Analyzing and generating sampling patterns are fundamental problems for many applications in computer graphics. Ideally, point patterns should conform to the problem at hand with spatially adaptive density and correlations. Although there are excellent algorithms that can generate point distributions with spatially adaptive density or anisotropy, the pair-wise correlation model, blue noise being the most common, is assumed to be constant throughout the space. Analogously, by relying on possibly modulated pair-wise difference vectors, existing analysis methods are designed to study only such spatially constant correlations. In this paper, we present the first techniques to analyze and synthesize point patterns with adaptive density and correlations. This provides a comprehensive framework for understanding and utilizing general point sampling.
Starting from fundamental measures from stochastic point processes, we propose an analysis framework for general distributions, and a novel synthesis algorithm that can generate point distributions with spatio-temporally adaptive density and correlations based on a locally stationary point process model. Our techniques also extend to general metric spaces. We illustrate the utility of the new techniques on the analysis and synthesis of real-world distributions, image reconstruction, spatio-temporal stippling, and geometry sampling.
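A pair correlation function, the basic point-process measure such frameworks build on, can be estimated for a 2D point set with a simple distance histogram. This sketch assumes a unit-square domain, ignores edge correction, and uses a hypothetical name (`pair_correlation`); it estimates a single global PCF, whereas a spatially adaptive analysis extracts multiple basis PCFs from local patches.

```python
import numpy as np

def pair_correlation(points, r_max, bins=32, domain_area=1.0):
    """Estimate the pair correlation function g(r) of a 2D point set by
    binning pairwise distances and normalizing against a Poisson process
    of the same density (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique unordered pairs
    hist, edges = np.histogram(d, bins=bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])             # bin centers
    density = n / domain_area
    # Expected unordered pair count per annulus for a Poisson process.
    expected = n * density * np.pi * (edges[1:]**2 - edges[:-1]**2) / 2.0
    g = hist / np.maximum(expected, 1e-12)
    return r, g

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))            # uniform points in the unit square
r, g = pair_correlation(pts, r_max=0.1)
```

For an uncorrelated (Poisson-like) pattern, g(r) hovers around 1; blue noise would show a dip at small r followed by oscillations.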