ICASSP 2019 - IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2019.8683164
Exposing Deep Fakes Using Inconsistent Head Poses

Abstract: In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as Deep Fakes). Our method is based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image, and in doing so introduce errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue,…
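The cue described in the abstract can be sketched as follows: recover a 3D head pose from 2D facial landmarks, do so twice (e.g. once from the whole face and once from the central region only), and flag a splice when the two poses disagree. Below is a minimal, hypothetical sketch of that idea using a weak-perspective (scaled-orthographic) pose fit in pure NumPy; the 3D landmark model, camera assumption, and the POS-style solver are illustrative choices, not the authors' implementation.

```python
import numpy as np

# Hypothetical generic 3D landmark model (nose tip, chin, eye corners,
# mouth corners); the paper's actual landmark set and camera model may differ.
MODEL_3D = np.array([
    [0.0,      0.0,    0.0],   # nose tip
    [0.0,   -330.0,  -65.0],   # chin
    [-225.0,  170.0, -135.0],  # left eye outer corner
    [225.0,   170.0, -135.0],  # right eye outer corner
    [-150.0, -150.0, -125.0],  # left mouth corner
    [150.0,  -150.0, -125.0],  # right mouth corner
])

def weak_perspective_pose(pts3d, pts2d):
    """Recover a 3x3 head rotation from 2D landmarks under a
    scaled-orthographic camera (POS-style least-squares fit)."""
    X = pts3d - pts3d.mean(axis=0)
    x = pts2d - pts2d.mean(axis=0)
    A, *_ = np.linalg.lstsq(X, x, rcond=None)  # solve X @ A ~= x
    M = A.T                                    # (2, 3): scaled top rows of R
    r1 = M[0] / np.linalg.norm(M[0])
    r2 = M[1] - (M[1] @ r1) * r1               # Gram-Schmidt re-orthogonalise
    r2 /= np.linalg.norm(r2)
    return np.vstack([r1, r2, np.cross(r1, r2)])

def pose_angle_diff(Ra, Rb):
    """Geodesic angle (radians) between two rotations; a large angle between
    whole-face and central-face poses would flag a possible splice."""
    c = np.clip((np.trace(Ra @ Rb.T) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(c))
```

In the paper's setting, the angle (or related pose-difference features) computed from the two landmark subsets would then feed a classifier; the threshold or classifier choice is learned from data rather than fixed by hand.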


Cited by 702 publications (414 citation statements)
References 13 publications
“…Level 1 -- four CNN models [6]; Level 2 -- MesoNet [9]; Level 3 -- abnormal physiological signals: breathing, pulse, eye blinking [11], head positions [13]…”
Section: Level Class Combination With Forensics: Typical Examples
Citation type: mentioning (confidence: 99%)
“…Li et al [12] used the color differences between GAN generated images and real images in a non-RGB color space to classify them. Xin et al [13] proposed a new method to detect and identify video forgery based on the inconsistency of the head positions. Peng et al [14] proposed a two-stream network for face tamper detection.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Computer science approaches address a number of tasks that are important for fighting misinformation at scale, such as identifying deep fakes (Yang et al, 2018), analysing provenance and spread patterns (Wu et al, 2016), and assisting humans (e.g. fact-checkers and journalists) with ranking and prioritisation (Wiebe and Riloff, 2005; Cazalens et al, 2018).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…There are some related work proposed to detect AI generated fake images or videos using deep networks. To detect DeepFake video, different detection methods have been proposed [4,5,6,7,8]. In addition, some works focus on the detection of GAN generated images [9,10,11,12].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)