2022
DOI: 10.1109/taffc.2020.2969189

Deep Multi-Task Multi-Label CNN for Effective Facial Attribute Classification

Abstract: Facial Attribute Classification (FAC) has attracted increasing attention in computer vision and pattern recognition. However, state-of-the-art FAC methods perform face detection/alignment and FAC independently, so the inherent dependencies between these tasks are not fully exploited. In addition, most methods predict all facial attributes using the same CNN network architecture, which ignores the different learning complexities of the attributes. To address the above problems, we propose a novel deep multi-task…
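The abstract describes predicting many binary facial attributes from one shared network. The core of any such multi-label setup is a per-attribute sigmoid output trained with binary cross-entropy; the sketch below is a minimal NumPy illustration of that loss, not the authors' implementation (the attribute names in the example are assumptions for illustration):

```python
import numpy as np

def multi_label_bce(logits, labels):
    """Mean binary cross-entropy over all attributes.

    logits: (batch, n_attrs) raw scores; labels: (batch, n_attrs) in {0, 1}.
    Each attribute gets its own sigmoid, so one image can be positive
    for several attributes at once (the multi-label setting).
    """
    probs = 1.0 / (1.0 + np.exp(-logits))   # per-attribute sigmoid
    eps = 1e-12                             # numerical safety for log(0)
    per_attr = -(labels * np.log(probs + eps)
                 + (1 - labels) * np.log(1 - probs + eps))
    return per_attr.mean()

# Toy example: 2 images, 3 attributes (e.g. "smiling", "eyeglasses", "hat").
logits = np.array([[2.0, -1.0, 0.5],
                   [-0.5, 1.5, -2.0]])
labels = np.array([[1, 0, 1],
                   [0, 1, 0]])
loss = multi_label_bce(logits, labels)
```

In a full FAC model the `logits` would come from attribute-specific heads on top of a shared CNN backbone; here they are hard-coded to keep the sketch self-contained.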


Cited by 39 publications (30 citation statements)
References 40 publications (57 reference statements)
“…In this section, we compare the proposed SSPL method with ten state-of-the-art methods, including five supervised FAR methods [22,27,20,2,14], three self-supervised learning methods [3,24,10], and two semi-supervised learning methods [28,23], on the CelebA and LFWA datasets.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…Sharma and Foroosh [27] leverage depthwise separable and pointwise convolutions to design a lightweight CNN for FAR, which significantly reduces the model parameters and improves computational efficiency. Mao et al. [22] perform FAR with a Deep Multi-task and Multi-label Convolutional Neural Network (DMM-CNN). He et al. [14] propose to use synthesized abstraction images to improve FAR performance.…”
Section: Related Work
confidence: 99%
“…Mao et al. propose a new algorithm, the deep multi-task multi-label CNN (DMM-CNN), to extract facial attributes from face images. By dividing the facial attributes into two categories, objective and subjective, they are able to run two different network architectures while taking advantage of multi-task learning, and they adopt a dynamic weighting scheme to handle the diverse learning complexities of the attributes [8]. In the same field, Ehrlich and Shields propose their multi-task facial attribute learning approach [15], while Mandel and Pascanu propose a method based on features shared between the attributes, using a multi-task restricted Boltzmann machine (MT-RBM) [16]: they learn a joint feature representation from facial landmark points for all attributes, followed by a bottom-up/top-down pass for learning the shared multi-task representation and a bottom-up pass for task prediction.…”
Section: Related Work
confidence: 99%
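The quote above mentions a dynamic weighting scheme that balances attributes with different learning complexities. One common way to realize such a scheme is to weight each task's loss by how hard it currently is; the sketch below illustrates that idea with loss-proportional weights. This is an assumed, simplified variant for illustration only — the exact DMM-CNN weighting rule may differ:

```python
import numpy as np

def dynamic_task_weights(recent_losses):
    """Give harder tasks (higher recent loss) larger weights.

    Illustrative assumption: weights are the recent per-task losses
    normalized to sum to 1, so the combined objective emphasizes
    attributes the network currently learns worst.
    """
    losses = np.asarray(recent_losses, dtype=float)
    return losses / losses.sum()

# Three attribute tasks; the first has the highest recent loss.
w = dynamic_task_weights([0.9, 0.3, 0.6])
```

In training, these weights would be recomputed every few iterations and used to scale each attribute's binary cross-entropy before summing into the total loss.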
“…The main task of FAC [5] is to predict multiple facial features, states, and emotions [6] for a given image or face portrait. Various algorithms have achieved excellent results on FAC at multiple levels, either applying CNN models [7] directly to extract facial features, or improving learning by distributing the attributes into two categories: objective attributes such as wearing a hat, eyeglasses, or bangs, and subjective ones such as smiling or big lips [8]. Some methods focus on grouping attributes on the basis of their inter-correlations [9], while others target facial landmark localization [10] to reduce noise.…”
Section: Introduction
confidence: 99%
“…To find out which level of deep representation is the best for FAC when fused with the high-level one, we make 3 sets of experiments. In a word, the proposed method shows superiority in FAC.

[15]           91.26   320
MCNN-AUX [15]  91.29   -
MCFA [16]      91.23   -
DMM-CNN [17]   91.70   -
PS-MCNN [18]   92.22   16
Our Method     92.35   16…”
Section: Ablation Study Level Selection Of Deep Representation
confidence: 99%
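The ablation above compares which level of deep representation works best when fused with the high-level one. A common fusion choice is simple channel-wise concatenation of the two feature vectors before the classifier heads; the sketch below shows that operation. Concatenation is an assumption here — the cited work may use a different fusion operator:

```python
import numpy as np

def fuse_features(mid_level, high_level):
    """Fuse a mid-level and a high-level feature vector by concatenation.

    The fused vector keeps both fine-grained (mid-level) and semantic
    (high-level) information, and its dimensionality is the sum of the two.
    """
    return np.concatenate([mid_level, high_level], axis=-1)

# Hypothetical sizes: a 256-d mid-level and a 128-d high-level feature.
fused = fuse_features(np.ones(256), np.ones(128))
```

A downstream attribute classifier would then take the 384-d fused vector as input instead of the high-level feature alone.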