2016 IEEE International Conference on Multimedia and Expo (ICME) 2016
DOI: 10.1109/icme.2016.7552902
One-shot deep neural network for pose and illumination normalization face recognition

Cited by 22 publications (8 citation statements)
References 10 publications
“…Likewise, [114] synthesised frontal faces using a 3D Generic Elastic Model (3DGEM) with texture mapping. [115] generates a frontal face from a single reference image using a 3D mesh fitted to five facial landmarks. Deep learning models also explore 2D and 3D model fitting for pose normalisation.…”
Section: Normalization
confidence: 99%
“…The deep learning model was able to synthesise frontal faces after training on several multi-posed datasets. [115] used a deep learning method to achieve pose and illumination normalisation, training a deep neural network with face images generated from 3DGEM. [116] introduced the Face Frontalization Generative Adversarial Model (FF-GAM) using a 3DMM.…”
Section: Normalization
confidence: 99%
“…For each group of neurons, a mini-batch of images corresponding to changes in only a single scene variable is used for training. Wu et al. [13] convert images into a recon code representing pose and illumination conditions, and then reconstruct the images in the frontal view under neutral lighting. With the aid of 3D face models, a large number of training samples are generated to optimize their network.…”
Section: Related Work
confidence: 99%
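The recon-code idea quoted above can be sketched as a tiny encoder–decoder: encode an image into an identity part plus pose/illumination parts, overwrite the latter with canonical (frontal, neutral-light) values, and decode. The dimensions, linear layers, and zero-valued canonical codes below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not from the paper).
IMG_DIM, ID_DIM, POSE_DIM, LIGHT_DIM = 64, 16, 4, 4
CODE_DIM = ID_DIM + POSE_DIM + LIGHT_DIM

# Random linear weights stand in for the trained encoder/decoder.
W_enc = rng.standard_normal((CODE_DIM, IMG_DIM)) * 0.1
W_dec = rng.standard_normal((IMG_DIM, CODE_DIM)) * 0.1

def encode(image):
    """Map an image to a 'recon code' split into identity, pose, illumination."""
    code = np.tanh(W_enc @ image)
    return (code[:ID_DIM],
            code[ID_DIM:ID_DIM + POSE_DIM],
            code[ID_DIM + POSE_DIM:])

def normalize_and_decode(image):
    """Keep the identity part; replace pose/illumination with canonical codes."""
    identity, _pose, _light = encode(image)
    frontal_pose = np.zeros(POSE_DIM)    # assumed canonical frontal view
    neutral_light = np.zeros(LIGHT_DIM)  # assumed canonical neutral lighting
    code = np.concatenate([identity, frontal_pose, neutral_light])
    return W_dec @ code

img = rng.standard_normal(IMG_DIM)
out = normalize_and_decode(img)
```

In the actual method the encoder and decoder would be deep networks trained end to end; the point here is only the factorised code and the substitution step.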
“…In [12], a local pattern extraction layer and an illumination elimination layer are designed and integrated into a Convolutional Neural Network (CNN) to obtain illumination-invariant feature maps. Wu et al. [13] devise a multi-task DNN to jointly perform normalization and reconstruction. A Generative Adversarial Network (GAN) with four types of loss functions is utilized in [14] to generate images under several fixed illumination conditions.…”
Section: Introduction
confidence: 99%
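A multi-task setup like the one attributed to Wu et al. [13] is typically trained with a weighted sum of per-task losses, one for the normalized (frontal, neutrally lit) output and one for reconstructing the input. The pixel-wise MSE objective and the weights below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def multitask_loss(pred_frontal, target_frontal, pred_recon, target_input,
                   w_norm=1.0, w_recon=0.5):
    """Weighted sum of the normalization loss (against the frontal,
    neutrally lit target) and the reconstruction loss (against the input)."""
    loss_norm = float(np.mean((pred_frontal - target_frontal) ** 2))
    loss_recon = float(np.mean((pred_recon - target_input) ** 2))
    return w_norm * loss_norm + w_recon * loss_recon

# Toy example: perfect reconstruction, imperfect normalization.
x = np.ones(8)
loss = multitask_loss(pred_frontal=x * 0.5, target_frontal=x,
                      pred_recon=x, target_input=x)
```

Only the normalization term contributes here (mean of 0.5² = 0.25); tuning the weights trades off the two tasks during training.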
“…Multimodal data has been studied for a variety of applications to analyze human behaviors, including person detection and identification [9,10], human action recognition [11,12], face recognition [13,14], as well as sentiment analysis.…”
Section: Related Work
confidence: 99%