2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.458

Learning Discriminative αβ-Divergences for Positive Definite Matrices

Abstract: Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need of manual selection to achieve the best possible performance. Further and as a result of their overwhelming complexity for large-scale problems, computing pairwise s…
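As a quick illustration of two of the measures named in the abstract, the sketch below (not taken from the paper) computes the affine-invariant Riemannian metric and the Jensen-Bregman logdet divergence between two SPD matrices with NumPy/SciPy; the matrix size, the random inputs, and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def airm_distance(X, Y):
    """Affine-invariant Riemannian metric: ||logm(X^{-1/2} Y X^{-1/2})||_F,
    computed via the generalized eigenvalues of (Y, X)."""
    w = eigh(Y, X, eigvals_only=True)           # eigenvalues of X^{-1/2} Y X^{-1/2}
    return np.sqrt(np.sum(np.log(w) ** 2))

def jbld(X, Y):
    """Jensen-Bregman logdet (Stein) divergence:
    logdet((X + Y)/2) - 0.5 * (logdet X + logdet Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

# Toy SPD matrices built as A @ A.T + eps * I to guarantee positive definiteness.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
X, Y = A @ A.T + 1e-3 * np.eye(3), B @ B.T + 1e-3 * np.eye(3)
print(airm_distance(X, Y), jbld(X, Y))
```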

Cited by 9 publications (6 citation statements)
References 56 publications (165 reference statements)

“…In particular, visual representations often rely on SPD manifolds, such as the kernel matrix, the covariance descriptor [26], and the diffusion tensor image [8]. Optimization problems on SPD manifolds are especially important in the medical imaging field [4,8]. Furthermore, optimization problems in hyperbolic spaces are important in natural language processing.…”
Section: Introduction (mentioning, confidence: 99%)
“…The input of a standard Euclidean-space-learning classifier (e.g., the support vector machine) is a feature vector that lies in Euclidean space, while the SPD manifold is clearly not a Euclidean space. Most existing SPD manifold classification methods convert an SPD representation to the requisite vector by the tangent approximation [7], [8], the kernel method [9], [10], [11], [12], or the coding technique [13], [14], [15]. These methods are suboptimal because the vector operation on an SPD representation inevitably distorts the matrix structure.…”
(mentioning, confidence: 99%)
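For context on the "tangent approximation" route mentioned in the excerpt above, the sketch below shows one common variant, the log-Euclidean tangent-space mapping at the identity: the SPD matrix is sent through the matrix logarithm and flattened into a Euclidean feature vector. The function name and the sqrt(2) off-diagonal weighting are standard conventions assumed here, not taken from the cited works.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_vector(X):
    """Tangent approximation at the identity: take logm(X) and flatten its upper
    triangle, scaling off-diagonal entries by sqrt(2) so the Euclidean inner
    product of two such vectors matches the Frobenius inner product."""
    L = np.real(logm(X))                        # real-valued for SPD input
    iu = np.triu_indices(L.shape[0], k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

# Example: a 3x3 SPD matrix becomes a 6-dimensional Euclidean feature vector
# that can be fed to, e.g., a support vector machine.
A = np.random.default_rng(0).standard_normal((3, 3))
X = A @ A.T + 1e-3 * np.eye(3)
print(log_euclidean_vector(X))
```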
“…The training data used for JHMDB is considerably larger than that of the other tests and is included to see how our algorithms perform on higher-dimensional data. GBWML, Log-Euclidean Metric Learning (LEML), and Information-Divergence Dictionary Learning [132] (IDDL, Problem 2.21) are also included in the k-NN performance tests for relevant comparison. GBWML is the most similar to our algorithms and provides the best comparison for their performance.…”
Section: Discussion (mentioning, confidence: 99%)
“…This approach learns a set of SPD atoms B_i ∈ S^n_++ that can be used to map from a test point X_i ∈ S^n_++ to a classification vector. Information Divergence Dictionary Learning (IDDL) [132,133] is a fairly recent dictionary learning algorithm that uses an information-theoretic formulation. The optimisation problem takes the form:
Section: SPD Metric Learning (mentioning, confidence: 99%)
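Since the excerpt truncates before the IDDL objective, the following is only a hedged illustration of the encoding step such a dictionary enables: a test SPD matrix X is mapped to a vector whose i-th entry is its divergence to atom B_i. The Jensen-Bregman logdet divergence stands in for the learned αβ-divergence of the paper, and the random atoms are placeholders rather than the output of the IDDL optimisation.

```python
import numpy as np

def jbld(X, Y):
    """Jensen-Bregman logdet (Stein) divergence between SPD matrices."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def encode(X, atoms):
    """Map an SPD test point X to a vector of divergences to the dictionary atoms;
    this vector would then be passed to a classifier."""
    return np.array([jbld(X, B) for B in atoms])

rng = np.random.default_rng(1)

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + 1e-3 * np.eye(n)

atoms = [random_spd(3) for _ in range(5)]   # placeholders for learned atoms B_i
print(encode(random_spd(3), atoms))
```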