Previous visible watermark removal algorithms required the watermark's location to be known in advance; a removal algorithm was then designed around that location and the watermark's features. When the watermark appears at random positions or at varying angles, such algorithms break down. The authors propose a visible watermark removal algorithm based on generative adversarial networks (GANs) and a self-attention mechanism. During training, they introduce a GAN model to learn a mapping between watermarked images and clean images. The authors observe that, across different watermarked images, the features of the watermarked region remain essentially invariant while the other regions vary; the self-attention layer automatically focuses on this invariant feature. Experiments on two public datasets show that the authors' model achieves excellent performance. Compared with the four most competitive watermark removal models, it raises the watermark removal rate indicator from 17% to 92%, and it improves the other four evaluation indicators by up to 20%.
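As a concrete illustration, a self-attention layer of the kind the abstract alludes to can be sketched in the SAGAN style below; this PyTorch module is an illustrative assumption, not the authors' implementation, and all names are hypothetical.

# Minimal sketch of a SAGAN-style self-attention layer; names are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable gate: the layer starts as an identity mapping and
        # gradually mixes in attended features during training.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, h*w)
        v = self.value(x).flatten(2)                   # (b, c, h*w)
        # Attention over all spatial positions lets the generator relate
        # the (invariant) watermark region to the rest of the image.
        attn = torch.softmax(q @ k, dim=-1)            # (b, h*w, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

Such a layer would typically be inserted between convolutional blocks of the GAN generator, where its global receptive field complements the local receptive fields of the convolutions.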
Conventional masked image restoration algorithms all utilise the correlation between the masked region and its neighbouring pixels, which does not work well when the masked region is large. Recent research uses a Generative Adversarial Network (GAN) model to produce better results for large masked regions, but it still struggles with complex ones. To obtain better results for complex masked regions, the authors propose a novel fast GAN model for masked image restoration, combining a GAN with the fast marching method (FMM). The resulting FMMGAN model consists of a neighbouring network, a generator network, a discriminator network, and two parsing networks. Extensive experiments on two open datasets show that the proposed model performs well on masked image restoration.
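The FMM component can be illustrated with OpenCV's Telea inpainting, which implements the fast marching method; the sketch below covers only this coarse-restoration stage, not the authors' full FMMGAN pipeline, and the function name is hypothetical.

# Sketch of the FMM stage using OpenCV's Telea inpainting, which
# implements the fast marching method; this is not the full FMMGAN model.
import cv2
import numpy as np

def fmm_restore(image: np.ndarray, mask: np.ndarray, radius: int = 3) -> np.ndarray:
    """Fill the masked region by marching inward from its boundary.

    image: HxWx3 uint8 image; mask: HxW uint8, nonzero where pixels are masked.
    """
    # cv2.INPAINT_TELEA selects the FMM-based inpainting algorithm.
    return cv2.inpaint(image, mask, inpaintRadius=radius, flags=cv2.INPAINT_TELEA)

In a combined pipeline of this kind, the FMM output would serve as a coarse estimate that the GAN generator subsequently refines.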
With the rapid development of image editing techniques, image splicing, typically copying a portion of one original image into another target image, has become one of the most prevalent challenges in our society. Existing algorithms relying on hand-crafted features can detect image splicing but lack precise localization of the tampered region. By changing the classification scheme of the fully convolutional network (FCN), we propose an improved FCN that can locate the spliced region. Specifically, we first insert the original images into the training dataset containing tampered images, forming positive and negative samples, and then set the ground-truth masks of the original images to all-black images. Forming positive and negative samples in this way guides the improved FCN to distinguish original images from spliced images. Experiments verify that the improved FCN can indeed locate the spliced region, and it outperforms existing algorithms, providing a feasible approach to digital image region forgery detection.
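The positive/negative sampling step can be sketched as a PyTorch dataset in which tampered images keep their splice masks and pristine originals receive all-zero (black) masks; class and variable names below are illustrative assumptions, not the paper's code.

# Sketch of the positive/negative sampling described above: tampered images
# are paired with their splice masks, while untouched originals get all-black
# (all-zero) masks so the FCN learns to predict "no spliced region" for them.
import torch
from torch.utils.data import Dataset

class SpliceDataset(Dataset):
    def __init__(self, tampered, masks, originals):
        # Positive samples: tampered[i] paired with its tamper mask masks[i].
        self.samples = list(zip(tampered, masks)) + [
            # Negative samples: the ground truth is an all-zero (black) mask.
            (img, torch.zeros(img.shape[-2:])) for img in originals
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, mask = self.samples[idx]
        # Per-pixel binary target: 1 = spliced pixel, 0 = authentic pixel.
        return image, mask.long()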