Abstract. Let M be a complete Riemannian manifold and ν a probability measure on M. Assume 1 ≤ p ≤ ∞. We derive a new bound (in terms of p, the injectivity radius of M, and an upper bound on the sectional curvatures of M) on the radius of a ball containing the support of ν which ensures existence and uniqueness of the global Riemannian L^p center of mass with respect to ν. A significant consequence of our result is that under the best available existence and uniqueness conditions for the so-called "local" L^p center of mass, the global and local centers coincide. In our derivation we also give an alternative proof of a uniqueness result by W. S. Kendall. As another contribution, we show that for a discrete probability measure on M, under the existence and uniqueness conditions, the (global) L^p center of mass belongs to the closure of the convex hull of the masses. We also give a refined result when M is of constant curvature.
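For reference, the object studied above is the minimizer of the following cost (standard definition, up to the choice of normalization; d denotes the Riemannian distance on M):

```latex
% Riemannian L^p center of mass (the Fréchet mean when p = 2) of a
% probability measure \nu on M: the minimizer of
f_p(x) = \frac{1}{p} \int_M d^{\,p}(x, y)\, d\nu(y), \qquad 1 \le p < \infty,
% and, for p = \infty, the minimizer of
f_\infty(x) = \sup_{y \in \operatorname{supp}(\nu)} d(x, y).
```

The "local" center of mass referred to above is a local minimizer of this cost restricted to a small ball around the support of ν, whereas the global center is the global minimizer over all of M.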
Abstract. We study the problem of finding the global Riemannian center of mass of a set of data points on a Riemannian manifold. Specifically, we investigate the convergence of constant-stepsize gradient descent algorithms for solving this problem. The challenge is that often the underlying cost function is neither globally differentiable nor convex, and despite this one would like to have guaranteed convergence to the global minimizer. After some necessary preparations we state a conjecture which, we argue, is the best convergence condition (in a specific, described sense) that one can hope for. The conjecture specifies conditions on the spread of the data points, the step-size range, and the location of the initial condition (i.e., the region of convergence) of the algorithm. These conditions depend on the topology and the curvature of the manifold and can be conveniently described in terms of the injectivity radius and the sectional curvatures of the manifold. For 2-dimensional manifolds of nonnegative curvature and manifolds of constant nonnegative curvature (e.g., the sphere in R^n and the rotation group in R^3) we show that the conjecture holds true. For more general manifolds we prove convergence results which are weaker than the conjectured one (but still stronger than the available results). We also briefly study the effect of the configuration of the data points on the speed of convergence. Finally, we study the global behavior of the algorithm on certain manifolds, proving (generic) convergence of the algorithm to a local center of mass with an arbitrary initial condition. An important aspect of our presentation is our emphasis on the effect of the curvature and topology of the manifold on the behavior of the algorithm.
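As a concrete illustration (a minimal sketch, not the paper's algorithm verbatim), a constant-stepsize Riemannian gradient descent for the L^2 center of mass on the unit sphere S^2 might look as follows; the helper names, the stepsize, and the iteration count are illustrative assumptions:

```python
import numpy as np

def sphere_log(x, y):
    # Log map on the unit sphere: tangent vector at x pointing toward y,
    # with length equal to the geodesic distance arccos(<x, y>).
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    u = y - c * x
    return theta * u / np.linalg.norm(u)

def sphere_exp(x, v):
    # Exponential map on the unit sphere: follow the geodesic from x
    # in direction v for arc length ||v||.
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

def riemannian_mean(points, step=1.0, iters=100):
    # Constant-stepsize gradient descent: the negative Riemannian gradient
    # of the L^2 cost is the average of the log maps to the data points.
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = np.mean([sphere_log(x, p) for p in points], axis=0)
        x = sphere_exp(x, step * g)
    return x
```

For data clustered well inside a geodesic ball (as the spread conditions above require), this iteration converges to the unique center of mass; for widely spread data the cost can have multiple local minimizers, which is exactly the difficulty the conjectured conditions address.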
Abstract—In this paper we propose a discrete-time protocol to align the states of a network of agents evolving in the space of rotations SO(3). The starting point of our work is Riemannian consensus, a general and intrinsic extension of classical consensus algorithms to Riemannian manifolds. Unfortunately, this algorithm is guaranteed to align the states only when the initial states are not too far apart. We show how to modify Riemannian consensus so that the states of the agents can be aligned, in practice, from almost any initial condition. While we focus on the specific case of SO(3), we hope that this work will represent the first step toward more general results.
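The underlying Riemannian consensus update on SO(3) that serves as the starting point above can be sketched as follows (a minimal synchronous version, not the modified protocol proposed in the paper; the helper names and the gain eps are illustrative assumptions):

```python
import numpy as np

def so3_exp(v):
    # Rodrigues' formula: map a rotation vector (axis * angle) to a rotation matrix.
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]]) / th
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def so3_log(R):
    # Inverse of so3_exp: recover the rotation vector from a rotation matrix.
    c = np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)
    th = np.arccos(c)
    if th < 1e-12:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def consensus_step(Rs, neighbors, eps=0.1):
    # Each agent i moves along the sum of log maps toward its neighbors' states.
    new = []
    for i, Ri in enumerate(Rs):
        v = sum((so3_log(Ri.T @ Rs[j]) for j in neighbors[i]), np.zeros(3))
        new.append(Ri @ so3_exp(eps * v))
    return new
```

As the abstract notes, this basic iteration is only guaranteed to reach alignment when the initial rotations are close enough; the paper's contribution is a modification that works, in practice, from almost any initial condition.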
Abstract. A class of simple Jacobi-type algorithms for non-orthogonal matrix joint diagonalization based on the LU or QR factorization is introduced. By an appropriate parametrization of the underlying manifolds, i.e., using triangular and orthogonal Jacobi matrices, we replace a high-dimensional minimization problem by a sequence of simple one-dimensional minimization problems. In addition, a new scale-invariant cost function for non-orthogonal joint diagonalization is employed. These algorithms are step-size free. Numerical simulations demonstrate the efficiency of the methods.
We investigate the sensitivity of the problem of Non-Orthogonal (matrix) Joint Diagonalization (NOJD). First, we consider the uniqueness conditions for the problem of Exact Joint Diagonalization (EJD), which is closely related to the issue of uniqueness in tensor decompositions. As a by-product, we derive the well-known identifiability conditions for Independent Component Analysis (ICA), based on an EJD formulation of ICA. We introduce some known cost functions for NOJD and derive flows based on these cost functions. Then we define and investigate the noise sensitivity of the stationary points of these flows. We show that the condition number of the joint diagonalizer and the uniqueness of the joint diagonalizer, as measured by the modulus of uniqueness (as defined in the paper), affect the sensitivity. We also investigate the effect of the number of matrices on the sensitivity. Our numerical experiments confirm the theoretical results.
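For concreteness, the classical off-norm cost for joint diagonalization (one of the known NOJD cost functions of the kind alluded to above, not the paper's specific choice) can be sketched as:

```python
import numpy as np

def off_cost(B, Cs):
    # Off-norm joint diagonalization cost:
    #   f(B) = sum_k || off(B C_k B^T) ||_F^2,
    # where off(M) zeroes the diagonal of M. A perfect joint diagonalizer
    # B makes every transformed matrix B C_k B^T diagonal, giving cost 0.
    total = 0.0
    for C in Cs:
        M = B @ C @ B.T
        total += np.sum((M - np.diag(np.diag(M))) ** 2)
    return total
```

In the non-orthogonal setting this cost must be combined with a constraint or penalty (e.g., on det B or the rows of B), since otherwise B → 0 drives the cost to zero trivially; scale-invariant variants avoid this degeneracy.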
Consensus algorithms are popular distributed algorithms for computing aggregate quantities, such as averages, in ad-hoc wireless networks. However, existing algorithms mostly address the case where the measurements lie in a Euclidean space. In this work we propose Riemannian consensus, a natural extension of the existing averaging consensus algorithm to the case of Riemannian manifolds. Unlike previous generalizations, our algorithm is intrinsic and, in principle, can be applied to any complete Riemannian manifold. We characterize our algorithm by giving sufficient convergence conditions on Riemannian manifolds with bounded curvature, and we analyze the differences that arise with respect to the classical Euclidean case. We test the proposed algorithms on synthetic data sampled from manifolds such as the space of rotations, the sphere and the Grassmann manifold.

DRAFT — IEEE TRANSACTIONS ON AUTOMATIC CONTROL

… conditions for the convergence of the proposed algorithms to a consensus configuration (i.e., where all the nodes converge to the same estimate). We also point out analogies and differences with respect to the Euclidean case. Our work has several important contributions with respect to the state of the art. First, our formulation is completely intrinsic, in the sense that it is not tied to a specific embedding of the manifold. Second, we consider more general (complete and not necessarily compact) Riemannian manifolds. Third, we provide sufficient conditions for the local and, in special cases, global convergence to the sub-manifold of consensus configurations. These conditions depend on the network connectivity, the geometric configuration of the measurements and the curvature of the manifold. We also provide stronger results that hold when additional assumptions on the manifold and network connectivity are made.

Finally, we show that, while Euclidean consensus converges to the Euclidean mean of the initial measurements, the Riemannian extension does not converge to the Fréchet mean, which is the Riemannian equivalent of the Euclidean mean.

Paper outline. In §II we review Euclidean consensus and relevant notions from Riemannian geometry and optimization. In §III we describe our extension of consensus algorithms to data in manifolds. Our main contributions are presented in §IV and §V. We first give convergence results for the case of general manifolds. We then strengthen our results for the particular case of manifolds with constant, non-negative curvature. In §VI we test the proposed algorithm on manifolds such as the special orthogonal group, the sphere and the Grassmann manifold. In the Appendix we report all the additional derivations and proofs that support the claims stated in the paper.

II. MATHEMATICAL BACKGROUND

In this section, we review some basic concepts related to Euclidean consensus, Riemannian geometry and optimization that are relevant to our development in the rest of the paper.

A. Review of Euclidean consensus

Consider a network with N nodes. We represent the network as a connected, undirected graph …
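The Euclidean consensus iteration reviewed in §II can be sketched as follows (a minimal synchronous sketch; the gain eps and naming are illustrative, and stability requires eps times the largest Laplacian eigenvalue of the graph to stay below 2):

```python
import numpy as np

def euclidean_consensus(x, neighbors, eps=0.1, iters=100):
    # Classical averaging consensus: each node repeatedly moves toward
    # the states of its neighbors,
    #   x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i).
    # On a connected undirected graph all states converge to the
    # Euclidean mean of the initial values.
    x = np.array(x, dtype=float)
    for _ in range(iters):
        x = x + eps * np.array(
            [sum(x[j] - x[i] for j in neighbors[i]) for i in range(len(x))])
    return x
```

The Riemannian extension replaces the differences x_j − x_i with log maps and the additive update with the exponential map; as noted above, the resulting limit is a consensus configuration but, in general, not the Fréchet mean.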
We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and a guarantee of convergence. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
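The closed-form two-matrix case mentioned above can be sketched as follows (helper names are ours; the formula itself is the standard Fisher-metric geodesic midpoint A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}):

```python
import numpy as np

def spd_sqrt(A):
    # Principal square root of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean_two(A, B):
    # Fisher-metric geometric mean of two SPD matrices:
    #   A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2},
    # i.e., the midpoint of the geodesic joining A and B. It is the unique
    # SPD solution G of the Riccati equation G A^{-1} G = B.
    As = spd_sqrt(A)
    As_inv = np.linalg.inv(As)
    return As @ spd_sqrt(As_inv @ B @ As_inv) @ As
```

For more than two matrices no such closed form is known, which is what motivates the iterative algorithms and the AJD-based approximation discussed above.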
We introduce a framework for defining a distance on the (non-Euclidean) space of Linear Dynamical Systems (LDSs). The proposed distance is induced by the action of the group of orthogonal matrices on the space of state-space realizations of LDSs. This distance can be efficiently computed for large-scale problems, hence it is suitable for applications in the analysis of dynamic visual scenes and other high-dimensional time series. Based on this distance we devise a simple LDS averaging algorithm, which can be used for classification and clustering of time-series data. We test the validity as well as the performance of our group-action based distance on synthetic as well as real data and provide comparisons with state-of-the-art methods.