2022
DOI: 10.7717/peerj-cs.1125

Deepfake attack prevention using steganography GANs

Abstract: Background Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection ideally impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using a convolutional neural networ…
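The abstract describes preventing deepfake attacks by hiding verification data inside media via steganography. As a minimal illustration of that idea (classic least-significant-bit embedding, not the paper's GAN-based method; all function names here are hypothetical), a watermark can be written into an image's pixel LSBs and later checked to flag tampering:

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a 0/1 bit sequence into the least-significant bits of a copy of `image`."""
    flat = image.flatten().copy()
    # Clear each target pixel's LSB, then OR in the watermark bit.
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least-significant bits."""
    return image.flatten()[:n_bits] & 1

def is_tampered(image: np.ndarray, expected_bits: np.ndarray) -> bool:
    """An image whose extracted watermark no longer matches is flagged as altered."""
    return not np.array_equal(extract_watermark(image, len(expected_bits)), expected_bits)

# Usage sketch: embed, verify, then simulate an attack that flips one pixel.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
watermark = rng.integers(0, 2, size=16, dtype=np.uint8)

stego = embed_watermark(cover, watermark)
print(is_tampered(stego, watermark))   # untouched image passes the check

attacked = stego.copy()
attacked[0, 0] ^= 1                    # a manipulation disturbs the hidden bits
print(is_tampered(attacked, watermark))
```

LSB embedding is fragile by design here: any edit that touches the watermarked pixels breaks the check, which is the property a prevention scheme exploits. The paper's contribution is to learn such embeddings with GANs rather than fix them by hand.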

Cited by 4 publications (2 citation statements)
References 39 publications
“…They made use of a Convolutional Neural Network (CNN) to automatically learn and identify portions inside a picture that were duplicated. Their methodology shows increased performance in comparison to conventional methods in terms of accuracy and speed, particularly for more complicated tampering cases [8].…”
Section: Literature Review A) Prevent Image Tampering
confidence: 99%
“…Deepfake images are created by employing generative models to generate synthetic images or movies that are extremely convincing. They proposed for the incorporation of AI approaches that could be explained to improve the interpretability of detection models, which would make the models more reliable and responsible [8].…”
Section: Literature Review A) Prevent Image Tampering
confidence: 99%