2018
DOI: 10.1109/tpami.2017.2776154

Cross Euclidean-to-Riemannian Metric Learning with Application to Face Recognition from Video

Abstract: Riemannian manifolds have been widely employed for video representation in visual classification tasks including video-based face recognition. The success mainly derives from learning a discriminant Riemannian metric which encodes the non-linear geometry of the underlying Riemannian manifolds. In this paper, we propose a novel metric learning framework to learn a distance metric across a Euclidean space and a Riemannian manifold to fuse the average appearance and pattern variation of faces within one …
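The abstract pairs a video's Euclidean mean appearance with a Riemannian (SPD covariance) description of its pattern variation and compares the two through a learned cross-space metric. The snippet below is a minimal sketch of that idea, assuming a log-Euclidean flattening of the covariance and two placeholder projection matrices `W_e` and `W_r` standing in for the learned cross-space transforms; it is not the paper's actual formulation.

```python
import numpy as np

def video_representation(frames, eps=1e-3):
    """Summarize a video (n_frames x d feature matrix) by its Euclidean mean
    and a strictly positive definite covariance matrix (an SPD-manifold point)."""
    mean = frames.mean(axis=0)
    cov = np.cov(frames, rowvar=False) + eps * np.eye(frames.shape[1])
    return mean, cov

def log_euclidean_embedding(spd):
    """Flatten an SPD matrix into a Euclidean vector via its matrix logarithm."""
    w, v = np.linalg.eigh(spd)
    log_spd = (v * np.log(w)) @ v.T
    return log_spd[np.triu_indices(spd.shape[0])]

# Toy usage: compare a still-image feature (Euclidean) with a video (Riemannian)
# after projecting both into a shared space with placeholder "learned" maps.
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 8))            # 30 frames of 8-dim features
image_feat = rng.normal(size=8)

mean, cov = video_representation(frames)
video_vec = np.concatenate([mean, log_euclidean_embedding(cov)])

W_e = rng.normal(size=(4, image_feat.size))  # hypothetical learned projections
W_r = rng.normal(size=(4, video_vec.size))
print(np.linalg.norm(W_e @ image_feat - W_r @ video_vec))
```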

Cited by 69 publications (29 citation statements)
References 45 publications
“…Florian et al. proposed FaceNet with the triplet loss [48], which explicitly maximizes the inter-class distance while minimizing the intra-class distance, using a margin term to determine the decision boundary between positive and negative pairs. These methods and their improved versions have achieved good results in face recognition, and have also been applied to metric learning [26,47,67], fine-grained visual recognition [13,32,69] and person re-identification [22,33,36].…”
Section: Face Recognition
confidence: 99%
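As a concrete reference for the margin-based triplet loss the statement above describes, here is a minimal NumPy sketch; the 0.2 margin and the L2-normalized 128-dimensional embeddings are illustrative choices, not values taken from the cited works.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: push the anchor-negative distance to exceed
    the anchor-positive distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Toy usage with L2-normalized embeddings
rng = np.random.default_rng(1)
a, p, n = (rng.normal(size=128) for _ in range(3))
a, p, n = (x / np.linalg.norm(x) for x in (a, p, n))
print(triplet_loss(a, p, n))
```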
“…Referring to references [26,54,55], we needed to make the matrix strictly positive definite. Therefore, we let M* = M + λE, where E is the unit matrix and λ = trace(M)/10.…”
confidence: 99%
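The formula in the statement above did not survive extraction cleanly; the sketch below assumes one plausible reading, namely that a symmetric matrix M is shifted by a multiple of the unit matrix E, with the coefficient set to trace(M)/10, so that the result is strictly positive definite.

```python
import numpy as np

def make_strictly_pd(M, divisor=10.0):
    """Shift a symmetric matrix by a multiple of the unit matrix E so that it
    becomes strictly positive definite. The trace-based coefficient is an
    assumed reading of the garbled formula in the cited statement."""
    coeff = np.trace(M) / divisor
    return M + coeff * np.eye(M.shape[0])

# Toy usage
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 5))
M = np.cov(X, rowvar=False)
M_star = make_strictly_pd(M)
print(np.all(np.linalg.eigvalsh(M_star) > 0))   # True: all eigenvalues positive
```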
“…Lastly, unlike the RPNet proposed by reference [49], our RPCC does not involve the fusion of raw HSI data into an SVM, and there is no nonlinear activation operation. We used the same method as in references [29,54] to approximate the matrices in the Euclidean space. The source code will be released soon (https://github.com/whuyang/RPCC).…”
Section: Classification Based On Spectral-Spatial Features
confidence: 99%
“…However, DLSC algorithms rely on linear operations on data points, and traditional DLSC algorithms only deal with data in a Euclidean vector space. With the development of machine learning algorithms for non-Euclidean data, data on Riemannian manifolds such as the Symmetric Positive Definite (SPD) manifold, the Stiefel manifold and the Grassmann manifold have been widely used in computer vision tasks such as image set classification [5], face recognition [6], action recognition [7], and object detection [8]. Riemannian data have also proven to be more robust feature descriptors for images and videos than traditional Euclidean feature vectors.…”
Section: Introduction
confidence: 99%
“…Riemannian data have proven to be more robust feature descriptors for images and videos than traditional Euclidean feature vectors. Hence, Riemannian machine learning algorithms have attracted extensive research in recent years and have been successful in kernel learning [9], [10], [11], metric learning [12], [13], discriminant analysis [14], dimensionality reduction [15], and so on. We study the DLSC algorithm on the SPD manifold in this paper.…”
Section: Introduction
confidence: 99%