2020
DOI: 10.1007/978-981-33-4673-4_54

Performance Analysis of Different Loss Function in Face Detection Architectures

Abstract: Masked face detection is a challenging task due to the occlusions created by masks. Recent studies show that deep learning models can perform effectively not only on occluded faces but also under unconstrained environments, varying illumination, and diverse poses. In this study, we address the occlusion problem caused by mask wearing in masked face detection using deep transfer learning. We also review recent deep learning models for face detection and consider VGG16, VGG…

Cited by 5 publications (3 citation statements) · References 22 publications
“…This model occupies 87 MB of storage, and its inference speed is 26 targets per second. We compared the proposed knowledge distillation method with the classic technologies reported in recent years and mentioned in Part 1, including KD: knowledge distillation (Hinton et al., 2015); FitNet (Romero et al., 2014); SP: similarity preserving (Tung and Mori, 2019); CC: correlation congruence (Peng et al., 2019); and CE: cross-entropy (Ferdous et al., 2020). Five neural networks with different parameter volumes and inference speeds were used as student networks.…”
Section: Experimental Results of the Proposed Knowledge Distillation ...
confidence: 99%
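
For orientation, below is a minimal sketch of the baseline KD objective referenced in the statement above (Hinton et al., 2015): a weighted sum of a temperature-scaled KL term against the teacher's soft targets and an ordinary cross-entropy term against the hard labels. It is written in PyTorch for concreteness; the function name kd_loss and the values of T and alpha are illustrative assumptions, not details taken from the citing paper.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 as in Hinton et al. (2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage with random tensors (batch of 8, 10 classes; shapes are illustrative).
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y).item())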
“…This model occupies 87 MB of storage, and its inference speed is 26 targets per second. We compared the proposed knowledge distillation method with the classic technologies reported in recent years and mentioned in Part 1, including KD: knowledge distillation (Hinton et al., 2015); FitNet (Romero et al., 2014); SP: similarity preserving (Tung and Mori, 2019); CC: correlation congruence (Peng et al., 2019); and CE: cross-entropy (Ferdous et al., 2020). Five neural networks with different parameter volumes and inference speeds were used as student networks.…”
Section: Comparison with Classical Knowledge Distillation Methods
confidence: 99%
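
Among the compared techniques, similarity-preserving (SP) distillation (Tung and Mori, 2019) can be stated compactly: it matches the pattern of pairwise activation similarities within a batch rather than matching logits directly. The sketch below assumes teacher and student feature tensors with the batch dimension first; the name sp_loss and the tensor shapes are illustrative assumptions rather than details from the citing paper.

import torch
import torch.nn.functional as F

def sp_loss(student_feat, teacher_feat):
    b = student_feat.size(0)
    # Flatten per-sample features and build b x b Gram (pairwise similarity) matrices.
    fs = student_feat.reshape(b, -1)
    ft = teacher_feat.reshape(b, -1)
    gs = fs @ fs.t()
    gt = ft @ ft.t()
    # Row-normalize so only the pattern of similarities is compared, not their scale.
    gs = F.normalize(gs, p=2, dim=1)
    gt = F.normalize(gt, p=2, dim=1)
    # Mean squared Frobenius distance between the two normalized similarity matrices.
    return ((gs - gt) ** 2).sum() / (b * b)

# Example usage: feature maps of shape (batch, channels, height, width).
print(sp_loss(torch.randn(8, 64, 7, 7), torch.randn(8, 256, 7, 7)).item())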
“…Loss Function and Optimizer. The loss function is used to evaluate the difference between the model's predicted and actual values [33-35]. The smaller the difference, the smaller the cross-entropy.…”
Section: Multiscale Information Fusion
confidence: 99%
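
As a concrete illustration of that last sentence, the short PyTorch snippet below (with made-up logits) shows that cross-entropy is small when the predicted distribution concentrates on the true class and larger when the prediction is nearly uniform.

import torch
import torch.nn.functional as F

labels = torch.tensor([1])                    # true class index
confident = torch.tensor([[0.5, 3.0, 0.2]])   # logits strongly favoring class 1
uncertain = torch.tensor([[1.0, 1.1, 0.9]])   # nearly uniform logits

print(F.cross_entropy(confident, labels).item())  # small loss (about 0.13)
print(F.cross_entropy(uncertain, labels).item())  # larger loss (about 1.0)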