2017 IEEE International Joint Conference on Biometrics (IJCB)
DOI: 10.1109/btas.2017.8272686

AFFACT: Alignment-free facial attribute classification technique

Abstract: Facial attributes are soft-biometrics that allow limiting the search space, e.g., by rejecting identities with nonmatching facial characteristics such as nose sizes or eyebrow shapes. In this paper, we investigate how the latest versions of deep convolutional neural networks, ResNets, perform on the facial attribute classification task. We test two loss functions: the sigmoid cross-entropy loss and the Euclidean loss, and find that for classification performance there is little difference between these two. Us…
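
To make the loss comparison in the abstract concrete, the following is a minimal sketch of the two objectives for multi-label attribute prediction, assuming PyTorch and CelebA-style binary labels; the tensor shapes and variable names are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn as nn

# Illustrative setup: a backbone (e.g., a ResNet) emits one raw score per
# attribute; CelebA-style data has 40 binary attributes per face.
num_attributes = 40
logits = torch.randn(8, num_attributes)                       # batch of network outputs
targets01 = torch.randint(0, 2, (8, num_attributes)).float()  # labels in {0, 1}

# Loss 1: sigmoid cross-entropy, treating each attribute as an
# independent binary classification problem.
bce_loss = nn.BCEWithLogitsLoss()(logits, targets01)

# Loss 2: Euclidean (squared) loss, regressing the scores directly onto
# labels recoded as {-1, +1}; prediction is then the sign of the score.
targets_pm1 = targets01 * 2.0 - 1.0
euclidean_loss = nn.MSELoss()(logits, targets_pm1)
```

Both objectives push each score toward the correct side of its decision threshold, which is consistent with the abstract's finding that classification performance differs little between them.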

Citation types: 1 supporting, 48 mentioning, 0 contrasting

Years of citing publications: 2019–2023

Cited by 40 publications (49 citation statements) · References 27 publications
“…In addition, our method is competitive with existing state-of-the-art methods. As stated before, our method is not directly comparable to AFFACT [13] or the method of Sun et al. [29], as these rely on multiple CNNs. Moreover, our method is generic and complementary in nature and can be applied to any CNN architecture.…”
Section: Quantitative Evaluation (mentioning)
confidence: 89%
“…Aug. + Dropout) 90.9%
Proposed (Geo. Aug. + Dropout + AugLabel) 91.2%
Face Tracer [17] 81.1%
LeNet+ANet [20] 87.3%
MOON [26] 90.9%
Walk and Learn [30] 88.7%
MCNN [14] 91.2%
AFFACT [13] 91.5%
Kalayeh et al. [15] 91.2%
Sun et al. [29] 91.6%
Fig. 3 shows qualitative visualisations of some of the predictions made by the baseline (Geo.…”
Section: Methods (mentioning)
confidence: 99%
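
The percentages quoted above are mean per-attribute accuracies on the 40 binary CelebA attributes. A short sketch of that metric, with assumed tensor shapes rather than any cited paper's code:

```python
import torch

def mean_attribute_accuracy(logits: torch.Tensor, targets01: torch.Tensor) -> float:
    """Mean of the per-attribute binary accuracies (the usual CelebA metric).

    logits:    (N, A) raw network scores, one column per attribute.
    targets01: (N, A) ground-truth labels in {0, 1}.
    """
    preds = (logits > 0).float()                              # threshold at 0
    per_attribute = (preds == targets01).float().mean(dim=0)  # accuracy per column
    return per_attribute.mean().item()                        # average over attributes
```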
“…More importantly, this approach can handle in-the-wild input images with complex illumination and occlusions, and no extra cropping or alignment operations are needed. Ding et al. [18] propose a cascade network that locates face regions according to different attributes and performs FAE simultaneously, with no need to align faces [31]. Li et al. [63] design the AFFAIR network, which learns a hierarchy of spatial transformations and predicts facial attributes without landmarks.…”
Section: Face Detection and Alignment (mentioning)
confidence: 99%
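
The landmark-free idea shared by these works can be illustrated with a spatial-transformer-style block that learns a global affine warp end-to-end from the attribute loss; this toy PyTorch module is a sketch under that assumption, not the AFFAIR or cascade-network implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAlign(nn.Module):
    """Toy spatial-transformer block: learn a global affine warp
    end-to-end instead of aligning faces with detected landmarks."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 6),  # the 6 parameters of a 2x3 affine matrix
        )
        # Initialise to the identity transform so training starts stably.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                          # predicted warp
        grid = F.affine_grid(theta, x.size(), align_corners=False)  # sampling grid
        return F.grid_sample(x, grid, align_corners=False)          # warped input
```

Because the warp parameters are trained by the downstream attribute loss, the network can learn whatever alignment helps classification instead of depending on detected landmarks.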
“…The experiments designed in this section assess how well the proposed models are able to confound gender classifiers that were unseen during training. These six gender classifiers comprise three pre-trained models: a commercial off-the-shelf gender classifier (G-COTS), IntraFace [55], and AFFACT [56], and three CNN models built in-house, which we refer to as CNN-1 and CNN-2 (trained on MORPH-train and LFW, respectively) and CNN-3 (trained on the union of MORPH-train and LFW). Note that these three CNN models show a similar level of performance on the original test sets compared to the other three pre-trained gender predictors.…”
Section: Performance in Confounding Unseen Gender Classifiers (mentioning)
confidence: 99%
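
A minimal sketch of the evaluation protocol this excerpt describes: apply the gender-confounding transform to test images, then measure how often each held-out classifier is fooled. The names `perturb` and `classifier` here are hypothetical stand-ins, not the cited paper's API.

```python
import torch

def confounding_rate(classifier, images, genders, perturb):
    """Fraction of perturbed images on which the classifier's gender
    prediction no longer matches the true label."""
    with torch.no_grad():
        adv = perturb(images)                  # gender-confounding transform
        preds = classifier(adv).argmax(dim=1)  # predicted gender per image
    return (preds != genders).float().mean().item()

# Hypothetical usage over the six unseen classifiers described above:
# for name, clf in {"G-COTS": gcots, "CNN-1": cnn1}.items():
#     print(name, confounding_rate(clf, test_images, test_labels, perturb))
```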