A weakly supervised method for makeup-invariant face verification
2017 · DOI: 10.1016/j.patcog.2017.01.011

Cited by 27 publications (16 citation statements) · References 23 publications
“…Additionally, hand-crafted features have been used in other fields of image processing and computer vision, such as [15] and [2, 3, 11, 16–19]. In [2], external features are injected into the fully connected layer of a dCNN to achieve better verification accuracy across large age gaps.…”
Section: Hand-crafted Feature Based Approaches
confidence: 99%
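To make the fusion idea attributed to [2] concrete, here is a minimal sketch, using PyTorch for concreteness; the backbone, layer names, and feature dimensions are illustrative assumptions, not the cited paper's architecture. It shows external (hand-crafted) descriptors being concatenated with deep features at the fully connected stage.

```python
import torch
import torch.nn as nn

class FusionVerifier(nn.Module):
    """Concatenates external descriptors with deep features at the FC stage.
    A sketch of the fusion idea in [2]; all dimensions are illustrative."""
    def __init__(self, cnn_dim=256, ext_dim=64, out_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in dCNN trunk
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, cnn_dim), nn.ReLU(),
        )
        self.fc = nn.Linear(cnn_dim + ext_dim, out_dim)

    def forward(self, img, ext_feat):
        deep = self.backbone(img)                   # deep features
        fused = torch.cat([deep, ext_feat], dim=1)  # inject external features
        return self.fc(fused)                       # fused embedding

model = FusionVerifier()
img = torch.randn(4, 3, 112, 112)  # face crops (hypothetical batch)
ext = torch.randn(4, 64)           # e.g. hand-crafted descriptors (hypothetical)
emb = model(img, ext)              # embedding used for verification
```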
“…A triplet deep network is presented in [11] to verify face images across makeup variations. Coupled autoencoders have been used in [16] for face recognition across aging variations.…”
Section: Hand-crafted Feature Based Approaches
confidence: 99%
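The triplet approach mentioned for [11] can likewise be illustrated with a short sketch (again PyTorch; the toy embedding network, tensor shapes, and margin value are assumptions): an anchor face without makeup, a positive of the same identity with makeup, and a negative of a different identity pass through a shared embedding trained with a standard triplet margin loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Toy convolutional embedding; a stand-in for the deep network in [11]."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-norm embeddings

net = EmbeddingNet()
triplet = nn.TripletMarginLoss(margin=0.2)

# Hypothetical batch: anchor = bare face, positive = same identity with
# makeup, negative = a different identity.
a, p, n = (torch.randn(8, 3, 112, 112) for _ in range(3))
loss = triplet(net(a), net(p), net(n))
loss.backward()
```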
“…A recurring theme in unsupervised learning is the use of self- (or meta-) supervision (Pathak et al., 2016; Larsson et al., 2016; Zhang et al., 2016; Doersch et al., 2015; Gao et al., 2016; Misra et al., 2016; Wang & Gupta, 2015). This refers to a network trained on a pretext (or proxy) task that is not of direct interest in itself but relates strongly to the final high-level task, e.g., object detection, classification, or action recognition (Girshick, 2015; Simonyan & Zisserman, 2014; Sun et al., 2017; Gkioxari et al., 2015). Automatic image colorization (Larsson et al., 2016; Zhang et al., 2016) is a typical example of a pretext task: colorizing grey images naturally requires prior knowledge of natural image appearance.…”
Section: Introduction
confidence: 99%
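As a hedged illustration of the colorization pretext task mentioned above, the sketch below regresses two chroma channels from the luminance channel with an MSE loss. Note that Zhang et al. and Larsson et al. actually pose colorization as classification over quantized color bins, so this regression variant is a simplification, and all network shapes and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColorizeNet(nn.Module):
    """Tiny fully convolutional net: predicts two chroma channels (e.g. the
    ab channels of Lab space) from the luminance channel alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2 chroma channels out
        )

    def forward(self, luminance):
        return self.net(luminance)

model = ColorizeNet()
gray = torch.randn(4, 1, 64, 64)       # luminance input (hypothetical batch)
target_ab = torch.randn(4, 2, 64, 64)  # ground-truth chroma for the same images
loss = F.mse_loss(model(gray), target_ab)  # simplified regression objective
loss.backward()
# After pretext training, the learned features can be transferred to the
# downstream task (detection, classification, action recognition, ...).
```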