Detecting and Recovering Sequential DeepFake Manipulation (2022)
DOI: 10.1007/978-3-031-19778-9_41

Cited by 16 publications (2 citation statements)
References 44 publications
“…These techniques often rely on discrepancies between the body/head of the underlying person and the deepfake face that is superimposed onto the original video. These include quality differences, where the inner face appears blurred or pixelated compared to the surrounding video (Younus & Hasan, 2020); differences in skin tone or texture between the original and new face (Ajoy et al., 2021); and inconsistent blending at the contours of the face (Shao et al., 2022). Similar artifacts are reported by humans who correctly identify deepfakes (Wöhler et al., 2021).…”
Section: Puppetry Deepfakes for Emotion Perception Research (mentioning)
confidence: 76%
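
One concrete cue from this passage, the quality gap between the blended inner face and the surrounding frame, can be sketched in a few lines. The Python snippet below is an illustrative heuristic only: the variance-of-Laplacian sharpness measure and the externally supplied face bounding box are assumptions for illustration, not part of any of the cited methods.

    import cv2
    import numpy as np

    def face_quality_gap(frame_bgr: np.ndarray, face_box: tuple) -> float:
        """Sharpness ratio between the face region and the rest of the frame.

        face_box = (x, y, w, h) is assumed to come from any face detector.
        A ratio well below 1.0 means the inner face is blurrier than its
        surroundings -- one heuristic cue for a blended (swapped) face.
        """
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Compute the Laplacian once on the whole frame so the face/background
        # split does not introduce artificial edges at the box boundary.
        lap = cv2.Laplacian(gray, cv2.CV_64F)

        x, y, w, h = face_box
        mask = np.zeros(gray.shape, dtype=bool)
        mask[y:y + h, x:x + w] = True

        face_sharpness = lap[mask].var()   # variance of Laplacian as a sharpness proxy
        rest_sharpness = lap[~mask].var()
        return float(face_sharpness / (rest_sharpness + 1e-8))

Any threshold on this ratio would be dataset-dependent; the cited detectors combine many such cues rather than relying on a single one.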
“…Shao et al. [20] proposed the Seq-DeepFake Transformer (SeqFakeFormer) to detect forged images. It first captures spatial manipulation traces of the image through the self-attention modules of the transformer encoder, and then adds a Spatially Enhanced Cross-Attention (SECA) module that generates a distinct spatial weight map for each manipulation to guide the cross-attention.…”
Section: Detection Based on Video Intraframe Features (mentioning)
confidence: 99%
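
A rough sketch may help make the mechanism this passage describes concrete. The single-head PyTorch module below illustrates the general idea of spatially enhanced cross-attention, where each manipulation query predicts a spatial weight map that biases its attention over the encoder's feature grid; the class name, shapes, and the additive-bias formulation are assumptions for illustration, not the SeqFakeFormer implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatiallyEnhancedCrossAttention(nn.Module):
        """Single-head sketch of SECA-style cross-attention.

        Each decoder query (one per manipulation step) predicts a spatial
        weight map over the encoder feature grid; the map biases the usual
        cross-attention scores. Shapes and the additive-bias formulation
        are illustrative assumptions, not the paper's exact design.
        """

        def __init__(self, dim: int, grid_h: int, grid_w: int):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            # One logit per spatial location, predicted from each query.
            self.weight_map = nn.Linear(dim, grid_h * grid_w)
            self.scale = dim ** -0.5

        def forward(self, queries: torch.Tensor, enc_feats: torch.Tensor) -> torch.Tensor:
            # queries:   (B, M, dim)   one embedding per manipulation step
            # enc_feats: (B, H*W, dim) flattened encoder feature grid
            q, k, v = self.q(queries), self.k(enc_feats), self.v(enc_feats)
            attn = q @ k.transpose(-2, -1) * self.scale    # (B, M, H*W)
            spatial_bias = self.weight_map(queries)        # (B, M, H*W) per-manipulation map
            attn = F.softmax(attn + spatial_bias, dim=-1)  # spatially enhanced attention
            return attn @ v                                # (B, M, dim)

    # Usage with hypothetical sizes: 5 manipulation queries over a 7x7 feature grid.
    seca = SpatiallyEnhancedCrossAttention(dim=256, grid_h=7, grid_w=7)
    out = seca(torch.randn(2, 5, 256), torch.randn(2, 49, 256))  # -> (2, 5, 256)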