2020
DOI: 10.3389/fnbot.2019.00112

A Privacy-Preserving Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition

Abstract: Recently, multi-task learning (MTL) has been extensively studied for various face processing tasks, including face detection, landmark localization, pose estimation, and gender recognition. This approach endeavors to train a better model by exploiting the synergy among the related tasks. However, the raw face dataset used for training often contains sensitive and private information, which can be maliciously recovered by carefully analyzing the model and outputs. To address this problem, we propose a novel pri…
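The abstract's premise — one shared representation serving face detection, landmark localization, pose estimation, and gender recognition — can be illustrated with a minimal hard-parameter-sharing sketch. All dimensions, the tanh trunk, and the head names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 128-d face feature, a 64-d shared trunk.
W_shared = rng.normal(size=(128, 64))
heads = {
    "detection": rng.normal(size=(64, 1)),   # face / no-face score
    "landmarks": rng.normal(size=(64, 10)),  # 5 (x, y) landmark points
    "pose":      rng.normal(size=(64, 3)),   # yaw, pitch, roll
    "gender":    rng.normal(size=(64, 2)),   # two-class logits
}

def forward(x):
    """Shared trunk feeds every task head (hard parameter sharing)."""
    h = np.tanh(x @ W_shared)
    return {task: h @ W for task, W in heads.items()}

x = rng.normal(size=(1, 128))
outputs = forward(x)  # one forward pass yields all four task outputs
```

The "synergy" the abstract mentions comes from the shared trunk: gradients from every task head update `W_shared`, so each task regularizes the representation the others use.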

Cited by 22 publications (5 citation statements)
References 34 publications
“…Multi-task methods have become particularly popular with the advent of deep learning because of the unique ability of neural networks to transfer and share knowledge among various tasks. MTL has been widely used to simultaneously learn related tasks, such as: face detection + head pose estimation [97,102,103,165,166], face alignment + head pose estimation [93,94,98-100], face detection + face alignment + head pose estimation [95,96,101], face detection + face alignment + head pose estimation + gender recognition [92,167], or in combination with other tasks such as face recognition and appearance attribute estimation (age, smile, etc.) [52,75], and finally head pose estimation + gaze estimation [168].…”
Section: Multi-task Methods
confidence: 99%
“…Another drawback of almost all the datasets is the data imbalance issue: the distribution between easy frontal faces and more challenging orientations is heavily unbalanced. Techniques to increase the number of hard faces [195] or to enhance the contribution of hard examples (such as HEM [150]) can be used to alter the data distribution space and …

[The remainder of this excerpt is a flattened survey table of head pose estimation methods; reconstructed rows below. Each row in the source also carried a constant "3" column between approach and datasets; entries lost in extraction are marked "…".]

Year | Authors | Approach | Datasets
…    | [82] | Model based DCNN | 300W-LP, AFLW2000, BIWI, CAS-PEAL, DriveFace
2019 | Yang et al. [88] | DCNN | 300W-LP, AFLW2000, BIWI
2020 | Barra et al. [130] | Model based | AFLW, BIWI, Pointing'04
2020 | Cao et al. [76] | DCNN | 300W-LP, AFLW2000, BIWI
2020 | Dai et al. [90] | DCNN | 300W-LP, AFLW2000, BIWI
2020 | Dapongy et al. [83] | Model based | 300W, 300W-LP, AFLW2000, CelebA, WFLW
2020 | Ewaisha et al. [168] | Multi-task DCNN | CAVE
2020 | Valle et al. [98] | Multi-task DCNN | 300W-LP a,c, AFLW a,c, AFLW2000 a, BIWI a, COFW c, WFLW a,c
2020 | Wang et al. [182] | PnP Model based | 300W, AFLW2000
2020 | Zhang et al. [183] | DCNN | 300W-LP, AFLW2000, BIWI
2020 | Zhang et al. [167] | Multi-task DCNN | AFLW a,b,c
2020 | Zhou et al. [7] | DCNN | 300W-LP, AFLW2000, BIWI, CMU Panoptic
2021 | Albiero et al. [166] | Multi-task DCNN | 300W-LP a, AFLW2000 a, BIWI a, WIDER
…    | … | Multi-task DCNN + ASM | 300W a,b, WFLW a,b
2021 | Hu et al. [185] | DCNN | 300W-LP, AFLW2000, BIWI
2021 | Khan et al. [80] | Segmentation based soft-max classifier | AFLW, BU, ICT-3DHP, Pointing'04
2021 | Liu et al. [85] | Multi-task DCNN | AFLW c, AFLW2000 a, WIDER *
2021 | Naina Dhingra [186] | DCNN | 300W-LP, AFLW2000, BIWI
2021 | Ruan et al. [87] | Model based 3DMM + DCNN | 300W-LP a,c,g, AFLW2000 ⋄•*, Florence g
2021 | Sheka et al. [91] | DCNN | 300W-LP, AFLW, AFLW2000, BIWI
2021 | Viet et al. [102] | Multi-task DCNN | 300W-LP a,b, BIWI a,b, CMU Panoptic a,b
2021 | Viet et al. [69] | DCNN | 300W-LP, AFLW2000, CMU Panoptic, UET-Headpose
2021 | Xia et al. [99] | Multi-task DCNN | 300W-LP a, 300VW c, WFLW c, WIDER b
2021 | Xin et al. [187] | Model based Graph CNN | 300W-LP, AFLW2000, BIWI
2021 | Wu et al. [86] | Model based 3DMM + DCNN | 300W-LP a,c,g, 300VW g, AFLW c, AFLW2000 a,c, Florence g
2022 | Cantarini et al. [188] | Model based D… | …”
Section: Datasets
confidence: 99%
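The excerpt above mentions hard example mining (HEM) as a way to rebalance easy frontal faces against harder orientations. A minimal loss-based sketch of the idea, keeping only the hardest fraction of a batch; the `keep_ratio` parameter and function name are illustrative assumptions, not the cited HEM [150] implementation:

```python
import numpy as np

def hard_example_mask(losses, keep_ratio=0.25):
    # Keep only the hardest (highest-loss) fraction of examples;
    # the rest contribute nothing to the mined batch loss.
    k = max(1, int(len(losses) * keep_ratio))
    threshold = np.sort(losses)[-k]  # k-th largest loss
    return losses >= threshold

# Per-example losses for a batch of 8 (mostly easy, two hard examples).
losses = np.array([0.1, 2.3, 0.05, 1.7, 0.4, 0.2, 3.1, 0.9])
mask = hard_example_mask(losses)
mined_loss = losses[mask].mean()  # averages only the two hardest losses
```

The effect is the one the excerpt describes: the gradient signal is dominated by the challenging examples rather than the abundant easy ones.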
“…In stochastic gradient descent, dropout [45] is equivalent to updating the weights using D∇ℓ as opposed to ∇ℓ, where D is a randomized 'mask' setting some values of ∇ℓ to 0 [24]. [The remainder of this excerpt is a flattened table of stability-enhancing methods from the citing paper; recoverable fragments: T steps, O(T) [55,58]; SGD, step size α [24], O(α) [55,58]; SGD, model averaging [24], α…]”
Section: Stability Enhancing Methods
confidence: 99%
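The masked-gradient view quoted above (updating with D∇ℓ rather than ∇ℓ) can be sketched directly. The function name, learning rate, and drop probability are illustrative, and this sketch deliberately omits the 1/(1-p) rescaling used by inverted dropout, to match the quoted formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_sgd_step(w, grad, drop_p=0.5, lr=0.1):
    # D is a random 0/1 mask: dropped coordinates receive a zero
    # gradient, so the update uses D * grad instead of grad.
    D = (rng.random(grad.shape) >= drop_p).astype(grad.dtype)
    return w - lr * (D * grad)

w = np.ones(4)
g = np.array([1.0, 2.0, 3.0, 4.0])
w_new = masked_sgd_step(w, g)
# Each coordinate is either untouched (masked out) or takes a full
# SGD step, which is exactly the D∇ℓ update in the quoted statement.
```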
“…Multi-task learning (MTL) is a method proposed to tackle training multiple related tasks at the same time. It works by sharing knowledge between these tasks so that each model performs better [4]. Essentially, MTL acts as a behind-the-scenes helper, enhancing the ability of machine learning (ML) models to generalize across different types of data [5].…”
Section: Introduction
confidence: 99%