2013
DOI: 10.1109/jstsp.2013.2261799
Newton Algorithms for Riemannian Distance Related Problems on Connected Locally Symmetric Manifolds

Abstract: The squared distance function is one of the standard functions on which an optimization algorithm is commonly run, whether it is used directly or chained with other functions. Illustrative examples include center of mass computation, implementation of the k-means algorithm and robot positioning. This function can have a simple expression (as in the Euclidean case), or it might not even have a closed form expression. Nonetheless, when used in an optimization problem formulated on non-Euclidean manifolds, t…

Cited by 25 publications (33 citation statements)
References 26 publications
“…, Y_N. As such, the task of computing Ŷ_N is well-studied in recent literature, and can be carried out using algorithms based on deterministic line-search [21][40], or on stochastic gradient descent [41][42]. The deterministic Riemannian gradient descent algorithm for computing Ŷ_N is given below in formulae (65) and (66) of Paragraph IV-A, based on [21].…”
Section: B. Statistical Inference Problems (mentioning)
confidence: 99%
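The deterministic Riemannian gradient descent mentioned in this excerpt can be sketched concretely on the manifold of symmetric positive-definite (SPD) matrices. The snippet below is a minimal illustration assuming the affine-invariant metric; it is not the cited paper's formulae (65)-(66), and the step size, iteration count and stopping rule are placeholder choices.

```python
# Minimal sketch: Riemannian gradient descent for the barycentre of SPD matrices
# Y_1, ..., Y_N under the affine-invariant metric (illustrative only).
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def spd_log(Y, Z):
    # Riemannian logarithm Log_Y(Z) = Y^{1/2} logm(Y^{-1/2} Z Y^{-1/2}) Y^{1/2}
    s = np.real(sqrtm(Y))
    s_inv = np.linalg.inv(s)
    return s @ np.real(logm(s_inv @ Z @ s_inv)) @ s

def spd_exp(Y, V):
    # Riemannian exponential Exp_Y(V) = Y^{1/2} expm(Y^{-1/2} V Y^{-1/2}) Y^{1/2}
    s = np.real(sqrtm(Y))
    s_inv = np.linalg.inv(s)
    return s @ expm(s_inv @ V @ s_inv) @ s

def barycentre(samples, step=0.5, n_iter=200, tol=1e-10):
    # The Riemannian gradient of (1/2N) sum_i d^2(Y, Y_i) is -(1/N) sum_i Log_Y(Y_i),
    # so each step moves along the exponential map in the mean-log direction.
    Y = samples[0].copy()
    for _ in range(n_iter):
        direction = np.mean([spd_log(Y, Z) for Z in samples], axis=0)
        if np.linalg.norm(direction) < tol:
            break
        Y = spd_exp(Y, step * direction)
    return Y
```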
“…where Log_Y denotes the Riemannian logarithm mapping (whose expression is (40), given below). To prove (72), note that for all Y ∈ P_m, ∫_{P_m} p(Z | Y, σ) dv(Z) = 1 (74), since p(Z | Y, σ), as defined by (20), is a probability density.…”
(mentioning)
confidence: 99%
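For orientation, the affine-invariant expression commonly used for the Riemannian logarithm on P_m, together with the normalisation property the excerpt invokes, reads as follows. This is the standard formula under that metric, stated here as an assumption since the excerpt's own expression (40) is not reproduced on this page.

```latex
\operatorname{Log}_{Y}(Z) \;=\; Y^{1/2}\,\log\!\bigl(Y^{-1/2}\, Z\, Y^{-1/2}\bigr)\, Y^{1/2},
\qquad
\int_{P_m} p(Z \mid Y, \sigma)\, dv(Z) \;=\; 1 .
```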
“…This is here only briefly indicated. Expression (19c) is a slight improvement of the one in [15] (see Theorem IV.1, Page 636), where it is enough to note that if R is the curvature tensor of M, then the operator R_v(u) = R(v, u)v has the eigenvalues 0 and (λ(a))² for each λ ∈ Δ⁺, whenever v, u ∈ T_x M_p with v = Ad(s)a [7][12]. It is well-known, by properties of the Jacobi equation [6], that H_x(φ(s, a)) has the same eigenspace decomposition as R_v in this case.…”
Section: Proof of Corollary (mentioning)
confidence: 94%
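The eigenspace statement quoted here is consistent with the standard Jacobi-field comparison for nonpositively curved locally symmetric spaces. As a hedged illustration for the squared-distance case only (expression (19c) itself is not reproduced in this excerpt), the Hessian of half the squared distance then acts as a scalar on each eigenspace of R_v:

```latex
\operatorname{Hess}_x\!\Bigl(\tfrac{1}{2}\,d^2(\cdot,p)\Bigr)\Big|_{E_\mu}
\;=\;
\begin{cases}
1, & \mu = 0,\\[4pt]
\lambda(a)\,\coth\!\bigl(\lambda(a)\bigr), & \mu = (\lambda(a))^2,\ \lambda \in \Delta^{+},
\end{cases}
```

where E_μ denotes the eigenspace of R_v for the eigenvalue μ and v = Ad(s)a, as in the quotation.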
“…In conclusion, the EM algorithm (45a)-(45c) provides an approach to the problem of density estimation in M, which can be expected to offer a suitable rate of convergence and which is not greedy in terms of memory. The main computational requirement of this algorithm is the ability to find Riemannian barycentres, a task for which there exists an increasing number of high-performance routines [42][44][45][11][31][32]. The fact that the EM algorithm reduces the problem of probability density estimation in M to one of repeated computation of Riemannian barycentres is due to the unique connection which exists between Gaussian distributions in M and the concept of Riemannian barycentre.…”
Section: Comparison to Kernel Density Estimation (mentioning)
confidence: 99%
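As a rough illustration of how such an EM scheme reduces to repeated barycentre computations, the schematic loop below alternates responsibilities (E-step) with weighted Riemannian barycentres (M-step). It is a generic sketch, not the cited update rules (45a)-(45c): the density `p`, the `weighted_barycentre` routine and the omitted dispersion update are all placeholders supplied by the caller.

```python
# Schematic EM for a mixture of Riemannian Gaussians on a manifold M.
# p(z, mean, sigma) and weighted_barycentre(samples, weights) are assumed to be
# provided by the caller; both are placeholders, not the paper's routines.
import numpy as np

def em_mixture(samples, means, sigmas, weights, p, weighted_barycentre, n_iter=20):
    K = len(means)
    for _ in range(n_iter):
        # E-step: responsibility of component k for each sample z.
        r = np.array([[weights[k] * p(z, means[k], sigmas[k]) for k in range(K)]
                      for z in samples])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, then set each mean to a weighted
        # Riemannian barycentre of the samples (the dispersion update is
        # manifold-specific and omitted from this sketch).
        weights = r.mean(axis=0)
        means = [weighted_barycentre(samples, r[:, k]) for k in range(K)]
    return means, sigmas, weights
```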