2018 25th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2018.8451143

Video Codec Forensics Based on Convolutional Neural Networks

Abstract: The recent development of multimedia has made video editing accessible to everyone. Unfortunately, forensic analysis tools capable of detecting traces left by video processing operations in a blind fashion are still in their infancy. One of the reasons is that videos are customarily stored and distributed in a compressed format, and codec-related traces tend to mask previous processing operations. In this paper, we propose to capture video codec traces through convolutional neural networks (CNNs) and exploit…
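As an illustration of the kind of model the abstract alludes to, the sketch below defines a small CNN that classifies which codec produced a luminance patch. The patch size, layer widths, and number of codec classes are assumptions made for the example, not the architecture reported in the paper.

```python
# Illustrative sketch (not the authors' exact architecture): a small CNN that
# predicts which codec / coding configuration produced a 64x64 luminance patch.
# Patch size, channel counts and the number of classes are assumptions.
import torch
import torch.nn as nn

class CodecTraceCNN(nn.Module):
    def __init__(self, num_codecs: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global pooling -> 64-dim descriptor
        )
        self.classifier = nn.Linear(64, num_codecs)

    def forward(self, x):                    # x: (N, 1, 64, 64) luma patches
        f = self.features(x).flatten(1)
        return self.classifier(f)

# Forward pass on a batch of random patches.
logits = CodecTraceCNN()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```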

Cited by 10 publications (14 citation statements, 2019–2024) · References 21 publications

“…In [44] the authors propose a recursive autoencoder (implemented with the LSTM architecture, to exploit temporal dependencies) to learn a feature representation of pristine videos and detect forgeries as outliers of the learned model. In [12], two CNNs are independently trained to extract codec- and quality-related features with the purpose of detecting temporal inconsistencies, showing that the combination of heterogeneous detectors enhances the overall performance. Some studies have also addressed the newly emerged threat of AI-generated, highly realistic forgeries, also known as DeepFakes.…”
Section: Related Work
confidence: 99%
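The excerpt above describes two independently trained CNNs whose codec- and quality-related features are combined to expose temporal inconsistencies. The toy sketch below illustrates that general idea only: concatenating two per-frame feature streams and scoring adjacent-frame differences. The function names, dummy extractors, and distance-based score are assumptions for illustration, not the pipeline of [12].

```python
# Toy illustration of fusing two heterogeneous per-frame feature streams and
# scoring consecutive frames for temporal inconsistency. Not the method of [12].
import numpy as np

def frame_features(frames, codec_net, quality_net):
    """Concatenate codec- and quality-related feature vectors for each frame."""
    return np.stack([np.concatenate([codec_net(f), quality_net(f)]) for f in frames])

def temporal_inconsistency(features):
    """Distance between adjacent feature vectors; a peak suggests a splice point."""
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    return diffs / (diffs.max() + 1e-12)

# Dummy extractors standing in for the two trained CNNs.
rng = np.random.default_rng(0)
codec_net = lambda f: rng.normal(size=16)
quality_net = lambda f: rng.normal(size=16)
scores = temporal_inconsistency(frame_features(range(200), codec_net, quality_net))
print(scores.shape)  # (199,)
```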
“…[Figure caption: ROC curves for frame-wise temporal splicing localization; comparison of codec-related, quality-related and combined features, for the proposed method and the baseline [12].] […] inconsistencies in the forged sequences: temporally-spliced videos have the first 100 frames encoded differently from the subsequent ones; spatially-spliced videos have a CIF window in the middle of the frame encoded differently from the rest.…”
Section: Test Dataset
confidence: 99%
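A frame-wise ROC of the kind referenced in the caption can be computed from per-frame ground-truth labels (spliced vs. original) and per-frame detection scores. The sketch below uses synthetic placeholder labels and scores (scikit-learn assumed), not results from the paper; only the splice point at frame 100 mirrors the excerpt.

```python
# Minimal sketch: frame-wise ROC for temporal splicing localization on a
# synthetic 300-frame video whose frames after index 100 are "forged".
import numpy as np
from sklearn.metrics import roc_curve, auc

num_frames = 300
labels = np.zeros(num_frames, dtype=int)
labels[100:] = 1                                   # spliced part of the sequence
scores = labels + np.random.default_rng(1).normal(scale=0.8, size=num_frames)

fpr, tpr, _ = roc_curve(labels, scores)
print(f"frame-wise AUC: {auc(fpr, tpr):.3f}")
```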