2018 IEEE International Workshop on Information Forensics and Security (WIFS)
DOI: 10.1109/wifs.2018.8630787

In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking

Abstract: New developments in deep generative networks have significantly improved the quality and efficiency of generating realistic-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with neural networks. Our method is based on the detection of eye blinking in the videos, a physiological signal that is not well reproduced in synthesized fake videos. Our method is tested over benchmark eye-blinking detection datasets and also shows promising performan…
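The core idea in the abstract, blink frequency as a physiological liveness cue, can be illustrated with a far simpler heuristic than the learned detector the paper describes. The sketch below counts blinks from the eye aspect ratio (EAR) computed over per-frame six-point eye landmarks; the landmark source, the 0.21 threshold, and the blink-rate rule of thumb are illustrative assumptions, not the paper's method.

```python
# Minimal blink-counting sketch via the eye aspect ratio (EAR).
# Assumes per-frame eye landmarks p1..p6 from any facial-landmark detector
# (e.g. a 68-point model); the 0.21 threshold is an illustrative choice.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark coordinates ordered p1..p6."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_sequence, threshold=0.21, min_consecutive=2):
    """Count a blink for every run of >= min_consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_sequence:
        if ear < threshold:
            run += 1
        else:
            if run >= min_consecutive:
                blinks += 1
            run = 0
    if run >= min_consecutive:
        blinks += 1
    return blinks

# Toy usage: the eye closes briefly twice across 12 frames.
ears = [0.30, 0.31, 0.15, 0.12, 0.29, 0.30, 0.28, 0.14, 0.13, 0.16, 0.30, 0.31]
print(count_blinks(ears))  # -> 2
```

A long talking-head clip whose estimated blink rate falls far below the roughly 15-20 blinks per minute typical of adults would then be a candidate for closer forensic inspection.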

Cited by 716 publications (418 citation statements); references 30 publications.
“…In response, the forensics community has been working on detecting such generated content [4], [12], [13], [14]. Marra et al [4] propose to use raw pixel and conventional forensics features extracted from real and fake images to train a classifier.…”
Section: Related Work (mentioning)
confidence: 99%
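As a rough illustration of the pipeline this citation statement summarizes (features extracted from real and fake images feeding a classifier), here is a minimal sketch; the synthetic data, patch size, and logistic-regression model are assumptions for illustration, not the features or classifiers actually evaluated by Marra et al. [4].

```python
# Minimal sketch: train a linear classifier on flattened raw-pixel features of
# real vs. fake image patches. The data here is synthetic noise that only
# demonstrates the shape of the pipeline, not real detection performance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
real = rng.normal(0.50, 0.20, size=(200, 32 * 32))  # stand-in "real" patches
fake = rng.normal(0.55, 0.20, size=(200, 32 * 32))  # stand-in "fake" patches
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

In a real pipeline the raw pixels would typically be augmented or replaced with conventional forensic features such as noise residuals or co-occurrence statistics before training.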
“…Other methods used networks proposed by their authors [53,26,54,22,18,24,21] while others are based on a hybrid approach [55,56,57,28]. Beside deep-learning and non-deep-learning categorization, these methods could be divided into image-based classifiers [53,55,26,54,22,23,56,57,28] and video-based classifiers [18,24,20,21]. For detecting images generated by GANs, Marra et al [58] performed benchmark testing on several CNNs and proposed a statistical model for detection [59].…”
Section: Computer-generated/manipulated Image/video Detection (mentioning)
confidence: 99%
“…Automatic feature extraction has dramatically improved detection performance [14,15] while deep generative methods like generative adversarial networks (GANs) enable images [3] and videos [16,17] to be produced that are almost humanly impossible to detect as fake. The attention of the forensics community has thus shifted to these new kinds of attacks [18,19,20,21]. Several approaches are image-based [22,23] while others work only on videos frames [18,20,21] or on video frames and voice information [24].…”
Section: Introduction (mentioning)
confidence: 99%