2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
DOI: 10.1109/fg.2017.12
LDF-Net: Learning a Displacement Field Network for Face Recognition across Pose

Cited by 14 publications (7 citation statements) · References 18 publications
“…They devised a novel network, DR-GAN, which simultaneously synthesizes frontal faces and learns pose-invariant feature representations. Hu et al. [52] proposed to directly transform a non-frontal face into a frontal one by learning a displacement field network (LDF-Net). LDF-Net achieves state-of-the-art face recognition performance across poses on Multi-PIE, especially at large poses.…”
Section: Unconstrained Face Recognition
confidence: 99%
“…Here, we consider pose and expression variations, and conduct two experiments. In the first experiment, following the setting of [3], [75], probe images consist of the images of all 337 [78] and LDF-Net [52]. The first three matchers are publicly available.…”
Section: Face Identification on Multi-PIE Database
confidence: 99%
“…
one-to-many — generating many patches or images of the pose variability from a single image:
  - 3D model: [139], [137], [165], [166], [53], [67], [197], [196]
  - 2D deep model: [279], [267], [182]
  - data augmentation: [124], [276], [51], [222], [187], [188], [192], [202]
many-to-one — recovering the canonical view of face images from one or many images of non-frontal view:
  - SAE: [101], [264], [240]
  - CNN: [278], [280], [89], [37], [246]
  - GAN: [91], [198], [41], [249]
…”
Section: Data Processing
confidence: 99%
“…Yim et al. [246] proposed a multi-task network that can rotate a face image of arbitrary pose and illumination to a target-pose face image by utilizing a user-supplied remote code. The authors of [89] transformed non-frontal face images to frontal images according to the displacement field of the pixels between them. Zhou et al. [275] proposed a novel non-rigid face rectification method based on local homography transformations, regularized by imposing a natural frontal-face distribution with a denoising autoencoder.…”
Section: B. Many-to-One Normalization
confidence: 99%
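The displacement-field idea cited above ([89]/[52]) can be illustrated with a minimal warping sketch: given a per-pixel displacement field (dx, dy), each output pixel samples the input image at the displaced source location. This is not the LDF-Net implementation — in the paper the field is predicted by a learned network, whereas here the field, the `warp_by_displacement` helper, and the toy constant-shift example are all hypothetical, and sampling is nearest-neighbor for brevity (a real pipeline would use bilinear interpolation):

```python
import numpy as np

def warp_by_displacement(img, dx, dy):
    """Backward-warp: output[y, x] = img[y + dy[y, x], x + dx[y, x]].

    Nearest-neighbor sampling; source coordinates falling outside the
    image are clamped to the border.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    return img[src_y, src_x]

# Toy example: a constant field sampling 2 pixels to the right,
# which shifts the image content 2 pixels to the left.
img = np.arange(16, dtype=float).reshape(4, 4)
dx = np.full((4, 4), 2.0)
dy = np.zeros((4, 4))
out = warp_by_displacement(img, dx, dy)
```

In a frontalization setting, `img` would be the non-frontal face and (dx, dy) the field mapping each frontal-view pixel back to its non-frontal source location, so applying the warp yields the frontalized image.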