This article develops a dynamic generalization of the nonuniform rational B-spline (NURBS) model. NURBS have become a de facto standard in commercial modeling systems because of their power to represent free-form shapes as well as common analytic shapes. To date, however, they have been viewed as purely geometric primitives that require the user to manually adjust multiple control points and associated weights in order to design shapes. Dynamic NURBS, or D-NURBS, are physics-based models that incorporate mass distributions, internal deformation energies, and other physical quantities into the popular NURBS geometric substrate. Using D-NURBS, a modeler can interactively sculpt curves and surfaces and design complex shapes to required specifications not only in the traditional indirect fashion, by adjusting control points and weights, but also through direct physical manipulation, by applying simulated forces and local and global shape constraints. D-NURBS move and deform in a physically intuitive manner in response to the user's direct manipulations. Their dynamic behavior results from the numerical integration of a set of nonlinear differential equations that automatically evolve the control points and weights in response to the applied forces and constraints. To derive these equations, we employ Lagrangian mechanics and a finite-element-like discretization. Our approach supports the trimming of D-NURBS surfaces using D-NURBS curves. We demonstrate D-NURBS models and constraints in applications including the rounding of solids, optimal surface fitting to unstructured data, surface design from cross sections, and free-form deformation. We also introduce a new technique for 2D shape metamorphosis using constrained D-NURBS surfaces.
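The abstract describes control points evolving under Lagrangian equations of motion integrated numerically. A minimal sketch of that idea, assuming a lumped (diagonal) mass and damping matrix and a toy linear stiffness term in place of the actual D-NURBS matrices, is a semi-implicit Euler step for M p̈ + D ṗ + K p = f:

```python
import numpy as np

def step_dynamics(p, v, f, m, d, K, dt):
    """One semi-implicit Euler step of M p'' + D p' + K p = f.
    Simplified stand-in for the D-NURBS equations of motion:
    p, v : (n, 3) generalized coordinates (control points) and velocities
    f    : (n, 3) applied generalized forces
    m, d : (n,) lumped mass and damping coefficients
    K    : (n, n) toy stiffness matrix from a deformation energy
    """
    a = (f - d[:, None] * v - K @ p) / m[:, None]   # accelerations
    v = v + dt * a                                  # update velocities first,
    p = p + dt * v                                  # then positions
    return p, v

# Toy example: a spring-like force pulls four control points toward the origin.
n = 4
p = np.array([[0.0, 0, 0], [1, 1, 0], [2, 1, 0], [3, 0, 0]])
v = np.zeros_like(p)
K = 0.1 * np.eye(n)                                 # toy stiffness
for _ in range(200):
    f = -0.5 * p                                    # simulated applied force
    p, v = step_dynamics(p, v, f, np.ones(n), np.full(n, 0.8), K, dt=0.05)
```

With damping present, the control points relax toward the equilibrium of the applied force, mirroring how a D-NURBS model settles under sculpting forces.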
Figure 1: Polycube spline for the Isidore Horse model. (a) The conformal polycube map serving as the parametric domain. (b), (c) Polycube T-splines obtained via the affine structure induced by the polycube map. Note that our polycube spline is globally defined as a "one-piece" shape representation without any cutting and gluing work, except at a finite number of extraordinary points (the corners of the polycube), colored yellow in (b) and (c). The red curves on the spline surface in (c) highlight the T-junctions. (d) Close-up of the spline model overlaid with its control points. The polycube T-spline contains 12,158 control points; the original model contains 150K vertices. The root-mean-square error is 0.07% of the diagonal of the model.

Abstract: This paper proposes a new concept of polycube splines and develops novel modeling techniques for using the polycube splines in solid modeling and shape computing. Polycube splines are essentially a novel variant of manifold splines, built upon the polycube map serving as their parametric domain. Our rationale for defining spline surfaces over polycubes is that polycubes have rectangular structure everywhere over their domains except at a very small number of corner points. The boundary of a polycube can be naturally decomposed into a set of regular structures, which facilitates tensor-product surface definition, GPU-centric geometric computing, and image-based geometric processing. We develop algorithms to construct polycube maps, and show that the introduced polycube map naturally induces an affine structure with a finite number of extraordinary points.
Besides its intrinsic rectangular structure, the polycube map can approximate the original scanned data set with very low geometric distortion, so our method for building polycube splines is both natural and necessary, as its parametric domain can mimic the geometry of the modeled object in a topologically correct and geometrically meaningful manner. We also design a new data structure that facilitates the intuitive and rapid construction of polycube splines. We demonstrate the polycube splines with applications in surface reconstruction and shape computing.
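Because the polycube boundary decomposes into regular rectangular cells, spline evaluation over each cell reduces to a standard tensor-product patch. A minimal sketch, using a uniform bicubic B-spline patch rather than the paper's T-splines, illustrates the evaluation:

```python
import numpy as np

def cubic_bspline_basis(t):
    """Uniform cubic B-spline blending functions for local parameter t in [0, 1)."""
    return np.array([(1 - t)**3,
                     3*t**3 - 6*t**2 + 4,
                     -3*t**3 + 3*t**2 + 3*t + 1,
                     t**3]) / 6.0

def eval_patch(P, u, v):
    """Evaluate a bicubic tensor-product B-spline patch.
    P    : (4, 4, 3) control points over one rectangular cell of the domain
    u, v : local parameters in [0, 1)
    """
    Bu, Bv = cubic_bspline_basis(u), cubic_bspline_basis(v)
    return np.einsum('i,j,ijk->k', Bu, Bv, P)   # sum_ij Bu[i] Bv[j] P[i,j]

# A flat 4x4 control grid reproduces the plane z = 0 (linear precision).
P = np.zeros((4, 4, 3))
P[..., 0], P[..., 1] = np.meshgrid(range(4), range(4), indexing='ij')
pt = eval_patch(P, 0.5, 0.5)
```

The same evaluation runs uniformly over every rectangular cell; only the finitely many extraordinary corner points need special treatment, as the abstract notes.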
The polycube map is a global cross-surface parameterization technique in which the polycube shape roughly approximates the geometry of the modeled object while retaining the same topology. The large variation of shape geometry and complex topological types in real-world applications make it difficult to construct a high-quality polycube that can serve as a good global parametric domain for a given object. In practice, existing polycube-map construction algorithms typically require a large amount of user interaction, either to pre-construct the polycube with great care or to interactively specify geometric constraints until a satisfactory map is reached. Hence, it is tedious and labor-intensive to construct polycube maps for surfaces of complicated geometry and topology. This paper aims to develop an effective method for constructing polycube maps for such surfaces. Using our method, users can quantitatively specify how closely the target polycube should mimic a given shape. Our algorithm constructs a similar polycube of high geometric fidelity and computes a high-quality polycube map automatically. In addition, our method is theoretically guaranteed to output a one-to-one map. To demonstrate its efficacy, we apply the automatically constructed polycube maps to a number of computer graphics applications, such as seamless texture tiling, T-spline construction, and quadrilateral mesh generation.
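To give a feel for "quantitatively specifying how closely the polycube mimics a shape," a toy proxy (not the paper's construction) is to collect the axis-aligned cells of size h occupied by surface samples; shrinking h tightens the polycube around the shape:

```python
import numpy as np

def polycube_cells(points, h):
    """Toy polycube proxy: axis-aligned cells of size h occupied by samples.
    Smaller h -> more cells -> a polycube that hugs the shape more closely.
    (Hypothetical illustration only; the paper's algorithm additionally
    guarantees correct topology and a one-to-one map, which this does not.)
    """
    return np.unique(np.floor(np.asarray(points) / h).astype(int), axis=0)

# Samples on a unit sphere: a coarser grid yields fewer occupied cells.
rng = np.random.default_rng(1)
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
coarse = polycube_cells(pts, 0.5)
fine = polycube_cells(pts, 0.1)
```

The cell size plays the role of the user's closeness parameter: the finer grid produces a polycube of higher geometric fidelity at the cost of more faces.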
This paper proposes a video saliency detection method based on spatial-temporal saliency fusion and low-rank coherency guided saliency diffusion. In contrast to conventional methods, which conduct saliency detection locally in a frame-by-frame manner and can easily produce incorrect low-level saliency maps, our method fuses color saliency with global motion clues in a batch-wise fashion and applies low-rank coherency guided spatial-temporal saliency diffusion to guarantee the temporal smoothness of the saliency maps. A series of saliency boosting strategies further improves saliency accuracy. First, the original long-term video sequence is segmented into short-term frame batches of equal length, and the motion clues of each batch are integrated and diffused temporally to facilitate the computation of color saliency. Then, based on the obtained saliency clues, inter-batch saliency priors are modeled to guide low-level saliency fusion. After that, both the raw color information and the fused low-level saliency are treated as low-rank coherency clues, which guide the spatial-temporal saliency diffusion, with an additional permutation matrix serving as an alternative rank-selection strategy. This guarantees the temporal consistency of the saliency maps and further boosts their accuracy. We conduct extensive experiments on five publicly available benchmarks and make comprehensive quantitative comparisons between our method and 16 state-of-the-art techniques. The results demonstrate the superiority of our method in accuracy, reliability, robustness, and versatility.
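The low-rank coherency idea rests on the observation that saliency maps within a short batch are highly redundant, so a low-rank approximation suppresses frame-to-frame flicker. A minimal sketch via truncated SVD (the rank here is a hypothetical fixed choice; the paper selects it adaptively via a permutation matrix):

```python
import numpy as np

def low_rank_coherency(S, rank=2):
    """Project a batch of per-frame saliency maps onto a low-rank subspace.
    S    : (frames, pixels) matrix, each row a flattened saliency map
    rank : number of singular components kept (fixed here for illustration)
    """
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    S_lr = (U[:, :rank] * sigma[:rank]) @ Vt[:rank]
    return np.clip(S_lr, 0.0, 1.0)   # keep values in the valid saliency range

# A batch of 8 nearly identical noisy maps collapses to a coherent one.
rng = np.random.default_rng(0)
base = rng.random(100)
S = np.tile(base, (8, 1)) + 0.05 * rng.standard_normal((8, 100))
S_smooth = low_rank_coherency(np.clip(S, 0, 1), rank=1)
```

The rank-1 projection removes most of the independent per-frame noise, which is the temporal-consistency effect the diffusion step exploits.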
In this paper, we propose a new PDE-based methodology for deformable surfaces that automatically evolve their shape to capture the geometric boundary of the data and simultaneously discover its underlying topological structure. Our model can handle multiple types of data (such as volumetric data, 3D point clouds, and 2D image data) within a common mathematical framework. The deformation behavior of the model is governed by partial differential equations (e.g., the weighted minimal surface flow). Unlike the level-set approach, our model always has an explicit representation of geometry and topology. The regularity of the model and the stability of the numerical integration process are ensured by a powerful Laplacian tangential smoothing operator. By allowing local adaptive refinement of the mesh, the model can accurately represent sharp features. We have applied our model to shape reconstruction from volumetric data, unorganized 3D point clouds, and multiple-view images. The versatility and robustness of our model allow its application to the challenging problem of multiple-view reconstruction. Our approach is unique in combining a large number of arbitrary camera views with an explicit mesh that is intuitive and easy to interact with. The model-based approach automatically selects the best views for reconstruction, and allows visibility checking and progressive refinement of the model as more images become available. The results of our extensive experiments on synthetic and real data demonstrate robustness, high reconstruction accuracy, and visual quality.
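The Laplacian tangential smoothing mentioned above can be sketched as follows: a minimal version assuming a uniform "umbrella" Laplacian and precomputed unit vertex normals, which regularizes triangle shapes without moving the surface off the captured geometry:

```python
import numpy as np

def tangential_smooth(verts, neighbors, normals, step=0.5):
    """One pass of Laplacian tangential smoothing: move each vertex by the
    umbrella (uniform Laplacian) vector with its normal component removed,
    so the mesh is regularized without altering the recovered shape.
    verts     : (n, 3) vertex positions
    neighbors : list of neighbor-index lists, one per vertex
    normals   : (n, 3) unit vertex normals
    """
    out = verts.copy()
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue                                  # skip fixed vertices
        lap = verts[nbrs].mean(axis=0) - verts[i]     # umbrella Laplacian
        lap -= np.dot(lap, normals[i]) * normals[i]   # drop the normal part
        out[i] = verts[i] + step * lap                # tangential move only
    return out

# A vertex off-center in the plane z = 0 is recentered without leaving the plane.
verts = np.array([[0.3, 0.2, 0.0], [1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0]], dtype=float)
neighbors = [[1, 2, 3, 4], [], [], [], []]
normals = np.tile([0.0, 0.0, 1.0], (5, 1))
smoothed = tangential_smooth(verts, neighbors, normals)
```

Removing the normal component is what distinguishes this operator from plain Laplacian smoothing, which would also shrink the surface along its normals.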