Deep learning has been widely applied to image inpainting. However, traditional image processing methods (i.e., patch-based and diffusion-based methods) generally fail to produce visually natural content and semantically reasonable structures because they cannot effectively process the high-level semantic information of images. To address this problem, we propose stacked generator networks assisted by a patch discriminator for multistage image inpainting. In the proposed method, the generator network consists of a three-layer stacked encoder-decoder architecture, which fuses feature information from different levels and achieves image inpainting via a coarse-to-fine hierarchical representation. Meanwhile, we split the masked image into different patches at each layer, which effectively enlarges the receptive field and extracts more useful image features. Moreover, a patch discriminator is introduced to judge whether the patches of the inpainted image are real or fake. In this way, our network can effectively exploit semantic information to produce a refined result. Furthermore, both perceptual loss and style loss are used to further improve the inpainting results. Experimental results on Places2 and Paris StreetView show that our approach generates high-quality inpainting results and is more effective than existing image inpainting methods.
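The two mechanisms named above, splitting the image into patches and scoring each patch as real or fake, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names are hypothetical, and the "discriminator" here is a toy random linear head standing in for a learned network, shown only to make the per-patch scoring idea concrete.

```python
import numpy as np

def split_into_patches(img, patch_size):
    """Split an H x W image into non-overlapping patch_size x patch_size
    patches (assumes H and W are divisible by patch_size)."""
    h, w = img.shape
    ph, pw = h // patch_size, w // patch_size
    return (img.reshape(ph, patch_size, pw, patch_size)
               .transpose(0, 2, 1, 3)
               .reshape(ph * pw, patch_size, patch_size))

def patch_discriminator_scores(patches, rng):
    """Stand-in for a learned patch discriminator: one real/fake score in
    (0, 1) per patch (here a toy random linear head plus a sigmoid)."""
    flat = patches.reshape(len(patches), -1)
    weights = rng.standard_normal(flat.shape[1]) * 0.01  # untrained toy weights
    logits = flat @ weights
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # a toy "inpainted" image
patches = split_into_patches(img, 16)      # 16 patches of 16 x 16
scores = patch_discriminator_scores(patches, rng)
print(patches.shape, scores.shape)
```

In a trained system each per-patch score would back-propagate an adversarial signal to the generator, so every local region of the inpainted result is pushed toward the real-image distribution rather than only the image as a whole.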
The Coronavirus Disease 2019 (COVID-19) epidemic has constituted a Public Health Emergency of International Concern. Chest computed tomography (CT) can reveal early abnormalities indicative of lung disease. Thus, accurate and automatic localisation of lung lesions is particularly important for assisting physicians in the rapid diagnosis of COVID-19 patients. The authors propose a classifier-augmented generative adversarial network framework for weakly supervised COVID-19 lung lesion localisation. It consists of an abnormality map generator, a discriminator, and a classifier. The generator produces an abnormality feature map M to locate lesion regions and then constructs pseudo-healthy images by adding M to the input patient images. Besides constraining the generated healthy images to match the real distribution via the discriminator, a pre-trained classifier is introduced so that the generated healthy images possess feature representations similar to those of real healthy subjects in terms of high-level semantic features. Moreover, an attention gate is employed in the generator to reduce noise in the irrelevant regions of M. Experimental results on a COVID-19 CT dataset show that the method captures more lesion areas while generating less noise in unrelated areas, and that it has significant advantages over existing methods in both quantitative and qualitative results.
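The core composition step described in this abstract, building a pseudo-healthy image by adding an attention-gated abnormality map M to the patient image, can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions, not the authors' network: the arrays stand in for learned generator outputs, and all names are hypothetical.

```python
import numpy as np

def pseudo_healthy(patient, abnormality_map, attention):
    """Compose a pseudo-healthy image: add the abnormality map M to the
    patient image, with an attention gate in [0, 1] suppressing M in
    regions the generator deems irrelevant."""
    gated = attention * abnormality_map
    return patient + gated, gated

patient = np.full((8, 8), 0.3)                 # toy CT slice intensities
m = np.zeros((8, 8)); m[2:5, 2:5] = 0.5        # abnormality map M over a lesion
att = np.zeros((8, 8)); att[2:5, 2:5] = 1.0    # attention passes only the lesion
healthy, gated = pseudo_healthy(patient, m, att)
```

The gated map is what serves as the lesion localisation: where the attention-weighted M is large, the generator had to change the image to make it look healthy, which marks that region as abnormal.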