2023
DOI: 10.1002/cav.2160
AEGAN: Generating imperceptible face synthesis via autoencoder‐based generative adversarial network

Abstract: Face recognition (FR) systems based on convolutional neural networks have shown excellent performance in human face inference. However, some malicious users may exploit such powerful systems to identify others' face images disclosed by victims' social network accounts, consequently obtaining private information. To address this emerging issue, synthesizing face protection images with visual and protective effects is essential. However, existing face protection methods encounter three critical problems: poor vi…

Cited by 4 publications (3 citation statements)
References 26 publications (44 reference statements)
“…Deep learning-based visual object tracking can be broadly classified into two major categories: approaches based on convolutional neural networks [30] and those founded on transformer models. Siamese networks initially employ structurally identical [31] but non-weight-sharing feature extraction networks to extract features from template images and search images.…”
Section: Generic Visual Object Tracking
confidence: 99%
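The Siamese design described in the statement above — two structurally identical branches that do not share weights, one for the template image and one for the search image — can be sketched as follows. This is a minimal NumPy illustration, not code from the cited papers; the dimensions, the single linear-plus-ReLU "extractor," and the dot-product similarity are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_extractor(in_dim, out_dim, rng):
    # One linear projection followed by ReLU stands in for a full
    # feature-extraction network. Every call builds the SAME structure
    # but samples fresh, independent weights.
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    b = np.zeros(out_dim)
    def extract(x):
        return np.maximum(W @ x + b, 0.0)
    return extract, W

# Two branches: identical architecture, non-shared (independent) weights.
template_branch, W_t = make_extractor(64, 16, rng)
search_branch, W_s = make_extractor(64, 16, rng)

template = rng.standard_normal(64)  # stands in for the template image
search = rng.standard_normal(64)    # stands in for the search image

f_t = template_branch(template)
f_s = search_branch(search)

# A cross-correlation-style similarity score between the two embeddings.
score = float(f_t @ f_s)
```

In a weight-sharing Siamese tracker the two branches would reuse one `(W, b)` pair; here `W_t` and `W_s` are sampled independently, which is the "structurally identical but non-weight-sharing" distinction the statement draws.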
“…For data-driven methods to perform well in the real world and challenging conditions, they must be exposed to various environments during the training process [34]. Unfavorable weather, low light conditions, low resolution, dust on lenses, and camera vibration are among these conditions.…”
Section: Adverse Weather Models Generation
confidence: 99%
“…Secondly, utilizing smartphone cameras enhances the model's ability to adapt for implementation in human-driven vehicles, thus increasing its generalization. Thirdly, applying these masks to the dataset is also seen as a form of data augmentation, which helps prevent the model from overfitting during training [34]. Examples of mapping some of these masks at severe levels on the training and testing data are shown in Fig.…”
Section: Adverse Weather Models Generation
confidence: 99%
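The statements above describe overlaying adverse-condition masks (low light, dust, vibration) on training images as a form of data augmentation. A toy version of that idea, assuming nothing about the cited papers' actual mask models, might look like the sketch below: a `severity` parameter scales both a darkening factor (low light) and multiplicative noise (dust/vibration). The function name and the specific mask model are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_weather_mask(image, severity=0.5, rng=rng):
    """Darken the image and overlay multiplicative noise to mimic
    low light and lens dust; severity in [0, 1] scales both effects."""
    darkened = image * (1.0 - 0.6 * severity)              # low-light model
    noise = 1.0 + severity * 0.2 * rng.standard_normal(image.shape)
    return np.clip(darkened * noise, 0.0, 1.0)

clean = rng.uniform(0.0, 1.0, size=(8, 8))  # toy grayscale image in [0, 1]
augmented = apply_weather_mask(clean, severity=0.8)
```

Applying such masks at random severities during training exposes the model to degraded inputs without collecting new data, which is the overfitting-prevention role the citing statement attributes to them.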