In public spaces such as zoos and sports facilities, the presence of fences often annoys tourists and professional photographers. There is a demand for a post-processing tool to produce a non-occluded view from an image or video. This "de-fencing" task is divided into two stages: one to detect fence regions and the other to fill in the missing parts. For over a decade, various methods have been proposed for video-based de-fencing. However, only a few single-image-based methods have been proposed. In this paper, we focus on single-image fence removal. Conventional approaches suffer from inaccurate and non-robust fence detection and inpainting because a single image provides limited content information. To solve these problems, we combine novel methods based on a deep convolutional neural network (CNN) with classical domain knowledge in image processing. The training process requires both fence images and the corresponding non-fence ground-truth images; therefore, we synthesize natural fence images from real images. Moreover, spatial filtering (e.g., a Laplacian filter and a Gaussian filter) improves the performance of the CNN for detection and inpainting. Our proposed method can automatically detect a fence and generate a clean image without any user input. Experimental results demonstrate that our method is effective for a broad range of fence images.

INDEX TERMS De-fencing, deep learning, image restoration, object removal, convolutional neural network.
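The spatial-filtering idea above can be illustrated with a minimal sketch: a Laplacian filter emphasizes thin, high-frequency structures such as fence wires, and the resulting edge map can be stacked with the original image as an extra input channel for the CNN. The 3x3 kernel, the channel-stacking scheme, and all names below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def laplacian_filter(img):
    """Apply a standard 3x3 Laplacian kernel to a grayscale image.

    The kernel choice is an assumption; it highlights edges such as
    thin fence wires against a smoother background.
    """
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=np.float32)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

# Stack the edge map with the original image as CNN input channels
# (a hypothetical way to feed filtered features to the network).
img = np.random.rand(32, 32).astype(np.float32)
edges = laplacian_filter(img)
cnn_input = np.stack([img, edges], axis=0)  # shape (2, 32, 32)
```

In practice a library convolution (e.g., `scipy.ndimage.laplace`) would replace the explicit loops; the loop form is shown only to make the kernel application explicit.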
Under severe weather conditions, outdoor images or videos captured by cameras can be affected by heavy rain and fog. For example, on a rainy day, autonomous vehicles have difficulty navigating due to the degraded visual quality of captured images. In this paper, we address the single-image rain removal problem (de-raining). Compared to video-based methods, single-image-based methods are challenging because of the lack of temporal information. Although many existing methods have tackled these challenges, they suffer from overfitting, over-smoothing, and unnatural hue changes. To solve these problems, we propose a GAN-based de-raining method. The optimal generator is determined by experimental comparisons. To train the generator, we learn the mapping between rainy and residual images from the training dataset. In addition, we synthesize a variety of rainy images to train our network. In particular, we focus not only on the orientations and scales of rain streaks but also on the rainy-image composite models. Our experimental results show that our method is suitable for a wide range of rainy images. Our method also achieves better performance than state-of-the-art methods on both synthetic and real-world images, in terms of both quantitative and visual quality.

INDEX TERMS Generative adversarial network, single-image de-raining, deep learning, image restoration, residual learning.
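The residual-learning formulation mentioned above can be sketched as follows: rather than predicting the clean image directly, the generator predicts the residual (the rain layer), and the de-rained result is the rainy input minus that residual. The simple additive composite model and all function names here are illustrative assumptions; the paper considers more elaborate composite models as well.

```python
import numpy as np

def synthesize_rainy(clean, rain_layer):
    """Additive composite model (an assumed, simplified form):
    rainy = clean + rain, clipped to the valid intensity range."""
    return np.clip(clean + rain_layer, 0.0, 1.0)

def derain(rainy, predicted_residual):
    """Residual learning: subtract the predicted rain layer
    to recover the de-rained image."""
    return np.clip(rainy - predicted_residual, 0.0, 1.0)

# Build a toy rainy image from a clean image plus sparse streak pixels.
rng = np.random.default_rng(0)
clean = rng.random((16, 16)).astype(np.float32) * 0.5
rain = (rng.random((16, 16)) > 0.95).astype(np.float32) * 0.3
rainy = synthesize_rainy(clean, rain)

# With a perfect residual prediction, the clean image is recovered exactly.
restored = derain(rainy, rain)
```

Learning the residual instead of the full image is often easier for the network because the rain layer is sparse and structurally simpler than natural-scene content.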