2018
DOI: 10.1016/j.jvcir.2018.01.010

Image Splicing Localization using a Multi-task Fully Convolutional Network (MFCN)

Abstract: In this work, we propose a technique that utilizes a fully convolutional network (FCN) to localize image splicing attacks. We first evaluated a single-task FCN (SFCN) trained only on the surface label. Although the SFCN is shown to provide superior performance over existing methods, it still provides a coarse localization output in certain cases. Therefore, we propose the use of a multi-task FCN (MFCN) that utilizes two output branches for multi-task learning. One branch is used to learn the surface label, whi…
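The abstract describes a two-branch multi-task FCN: one branch learns the surface (tampered-region) label, the other an edge/boundary label. As a hypothetical sketch only (not the paper's implementation; the function names, the `edge_weight` parameter, and the per-pixel binary cross-entropy form are assumptions), the combined training objective can be illustrated as a weighted sum of two per-pixel losses:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy averaged over the mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def mfcn_loss(surface_pred, surface_gt, edge_pred, edge_gt, edge_weight=1.0):
    """Illustrative multi-task loss: surface (tampered-region) branch
    plus boundary (tampered-edge) branch, combined with a weight.
    (edge_weight is an assumed hyperparameter, not from the paper.)"""
    return bce(surface_pred, surface_gt) + edge_weight * bce(edge_pred, edge_gt)

# Toy 4x4 example: a 2x2 spliced patch; for a 2x2 patch every
# patch pixel lies on the boundary, so the two masks coincide.
surface_gt = np.zeros((4, 4))
surface_gt[1:3, 1:3] = 1.0
edge_gt = surface_gt.copy()
surface_pred = np.full((4, 4), 0.5)  # uninformed prediction
edge_pred = np.full((4, 4), 0.5)
print(round(mfcn_loss(surface_pred, surface_gt, edge_pred, edge_gt), 4))  # → 1.3863
```

With a uniform 0.5 prediction, each branch contributes ln 2 ≈ 0.6931, so the combined loss is about 1.3863; training the two branches jointly is what lets the boundary signal sharpen the otherwise coarse surface localization.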


Cited by 316 publications (204 citation statements)
References 40 publications
“…Cozzolino et al [9] treat this problem as an anomaly detection task and use an autoencoder based on extracted features to distinguish those regions that are difficult to reconstruct as tampered regions. Salloum et al [29] use a Fully Convolutional Network (FCN) framework to directly predict the tampering mask given an image. They also learn a boundary mask to guide the FCN to look at tampered edges, which assists them in achieving better performance in various image manipulation datasets.…”
Section: Related Work
confidence: 99%
“…To fine-tune our model on these datasets, we extract the bounding box from the ground-truth mask. We compare with other approaches on the same training and testing split protocol as [2] (for NIST16 and COVER) and [29] (for Columbia and CASIA). See Table 2.…”
Section: Testing on Standard Datasets
confidence: 99%