Abstract. We introduce a new class of shape approximation techniques for irregular triangular meshes. Our method approximates the geometry of the mesh using a linear combination of a small number of basis vectors. The basis vectors are functions of the mesh connectivity and of the mesh indices of a number of anchor vertices. There is a fundamental difference between the bases generated by our method and those generated by geometry-oblivious methods, such as Laplacian-based spectral methods. In the latter methods, the basis vectors are functions of the connectivity alone. The basis vectors of our method, in contrast, are geometry-aware, since they depend both on the connectivity and on a binary tagging of vertices that are "geometrically important" in the given mesh (e.g., extrema). We show that by defining the basis vectors to be the solutions of certain least-squares problems, the reconstruction problem reduces to solving a single sparse linear least-squares problem. We also show that this problem can be solved quickly using a state-of-the-art sparse-matrix factorization algorithm. We show how to select the anchor vertices to define a compact and effective basis from which an approximated shape can be reconstructed. Furthermore, we develop an incremental update of the factorization of the least-squares system. This allows a progressive scheme in which an initial approximation is incrementally refined by a stream of anchor points. We show that the incremental update and solving the factored system are fast enough to allow on-line refinement of the mesh geometry.
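To make the reconstruction step concrete, the following is a minimal sketch (in Python with SciPy, and not the authors' exact basis construction) of the kind of sparse linear least-squares problem involved: one geometry coordinate is recovered from a handful of anchor values by stacking soft anchor constraints under the rows of a graph Laplacian built from the connectivity alone. The function name, the uniform anchor weight w, and the use of an unweighted combinatorial Laplacian are illustrative assumptions.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def reconstruct_coordinate(n_vertices, edges, anchors, anchor_values, w=1.0):
    """Solve argmin_x ||L x||^2 + w^2 * sum_i (x[anchors[i]] - anchor_values[i])^2.

    edges         : iterable of (i, j) vertex-index pairs (the connectivity)
    anchors       : indices of the anchor vertices
    anchor_values : prescribed coordinate values at the anchors
    w             : soft-constraint weight (illustrative choice)
    """
    # Combinatorial (graph) Laplacian built from the connectivity alone.
    i, j = np.asarray(edges, dtype=int).T
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices))
    A = A + A.T
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    # Soft anchor constraints, stacked under the Laplacian rows.
    C = sp.coo_matrix((w * np.ones(len(anchors)),
                       (np.arange(len(anchors)), anchors)),
                      shape=(len(anchors), n_vertices))
    M = sp.vstack([L, C]).tocsr()
    rhs = np.concatenate([np.zeros(n_vertices),
                          w * np.asarray(anchor_values, dtype=float)])

    # A single sparse least-squares solve recovers the coordinate at every vertex.
    return spla.lsqr(M, rhs)[0]

One such solve is performed per coordinate; since x, y, and z share the same sparse matrix, a single factorization can be reused, which is where a fast sparse factorization and its incremental update pay off.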
We present a new out-of-core sparse symmetric-indefinite factorization algorithm. The most significant innovation of the new algorithm is a dynamic partitioning method for the sparse factor. This partitioning method results in very low I/O traffic and allows the algorithm to run at high computational rates even though the factor is stored on a slow disk. Our implementation of the new algorithm compares well with both high-performance in-core sparse symmetric-indefinite codes and a high-performance out-of-core sparse Cholesky code.
Abstract. The four existing stable factorization methods for symmetric indefinite matrices suffer serious defects when applied to banded matrices. Partial pivoting (row or column exchanges) maintains a band structure in the reduced matrix and the factors, but destroys symmetry completely once an off-diagonal pivot is used. Two-by-two block pivoting and Gaussian reduction to tridiagonal (Aasen's algorithm) maintain symmetry at all times, but quickly destroy the band structure in the reduced matrices. Orthogonal reductions to tridiagonal maintain both symmetry and the band structure, but are too expensive for linear-equation solvers. We propose a new pivoting method, which we call snap-back pivoting. When applied to banded symmetric matrices, it maintains the band structure (like partial pivoting does), it keeps the reduced matrix symmetric (like 2-by-2 pivoting and reductions to tridiagonal), and it is fast. Snap-back pivoting reduces the matrix to a diagonal form using a sequence of elementary elimination steps, most of which are applied symmetrically from the left and from the right (but some are applied unsymmetrically). In snap-back pivoting, if the next diagonal element is too small, the next pivoting step might be unsymmetric, leading to asymmetry in the next row and column of the factors. But the reduced matrix snaps back to symmetry once the next step is completed.

Key words. symmetric-indefinite matrices, pivoting, banded matrices, matrix factorizations, element growth

AMS subject classifications. 15A06, 15A23, 65F05

DOI. 10.1137/040610106

1. Introduction. We propose a new method for the direct solution of a linear system of equations Ax = b, where A is an n-by-n banded symmetric indefinite matrix with half bandwidth m. The method performs O(nm^2) work. Our method reduces A to a diagonal matrix by eliminating one or two rows and columns in each step. After each elimination step, the reduced matrix is a banded symmetric matrix with half bandwidth at most 2m. Although at each step the reduced matrix is symmetric, the factors corresponding to steps in which we eliminate two columns together may not be symmetric. The algorithm requires a mixture of symmetric and unsymmetric data. The element growth in the reduced matrices is bounded by 4^(n-1). Elements of the factors are bounded by 3 or by the elements of the reduced matrices. Our method achieves these goals using an intricate elimination scheme that employs both Gaussian row and column operations and Givens rotations.

Our method reduces the matrix to a diagonal one or two rows/columns at a time. If the next diagonal element is large, an ordinary symmetric Gaussian elimination step will reduce the next row and column. Such a step adds a column to the left factor and its transpose to the right factor. Since a symmetric matrix is subtracted from the symmetric trailing submatrix, it remains symmetric. If the next diagonal
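As a concrete illustration of the easy case described above, the sketch below (Python/NumPy, not the paper's implementation) performs one ordinary symmetric Gaussian elimination step on a dense symmetric matrix when the current diagonal element is acceptably large: a symmetric rank-1 matrix is subtracted from the trailing submatrix, so the reduced matrix stays symmetric. The unsymmetric snap-back steps, the banded storage, and the pivot-size test itself are not shown; the function name is illustrative.

import numpy as np

def symmetric_elimination_step(A, k):
    """Eliminate row and column k of the symmetric matrix A in place.

    Returns the multiplier column l (column k of the unit lower-triangular
    left factor) and the pivot d = A[k, k]; the transpose of l forms the
    corresponding row of the right factor.
    """
    d = A[k, k]
    l = A[k + 1:, k] / d                       # multipliers below the pivot
    A[k + 1:, k + 1:] -= d * np.outer(l, l)    # symmetric rank-1 update
    A[k + 1:, k] = 0.0                         # row and column k are eliminated
    A[k, k + 1:] = 0.0
    return l, d

Applying this step for k = 0, ..., n-1 on a matrix whose diagonal pivots are all sufficiently large amounts to an unpivoted LDL^T factorization; snap-back pivoting departs from it only when a small diagonal element forces a temporarily unsymmetric step.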