Due to the outbreak of the COVID-19 pandemic, wearing masks in public areas has become an effective way to slow the spread of disease. However, masks also pose challenges for everyday applications, because half of the face is occluded; this has motivated the idea of removing masks via face inpainting. Face inpainting has achieved promising performance but often fails to guarantee high fidelity. In this paper, we present a novel mask-removal inpainting network conditioned on face attributes known in advance, including nose shape, chubbiness, makeup, gender, mouth, beard, and youth, aiming to bring the repaired face image closer to the ground truth. To this end, we propose a dual-pipeline network based on GANs: a reconstructive path, used during training, that exploits the missing regions of the ground truth to learn a prior distribution, and a generative path that predicts the content of the masked region. To simulate the mask-removal setting, we build a synthetic facial occlusion that mimics a real mask. Experiments show that our method not only generates faces better aligned with the true attributes, but also preserves semantic and structural plausibility compared with state-of-the-art methods.
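The synthetic facial occlusion mentioned above can be sketched as follows. This is a minimal illustration assuming a simple trapezoid over the lower half of the face; the function name and geometry are illustrative assumptions, not the paper's actual mask model:

```python
import numpy as np

def apply_synthetic_mask(face, value=0.0):
    """Occlude the lower half of a face image with a trapezoid that
    roughly mimics a surgical mask (illustrative geometry only).

    Returns the occluded image and the boolean occlusion mask."""
    h, w = face.shape[:2]
    occluded = face.copy()
    mask = np.zeros((h, w), dtype=bool)
    # Trapezoid: wide near the nose, narrowing toward the chin.
    for y in range(h // 2, h):
        span = int(w * (0.9 - 0.4 * (y - h // 2) / (h // 2)))
        x0 = (w - span) // 2
        mask[y, x0:x0 + span] = True
    occluded[mask] = value
    return occluded, mask
```

During training, the (occluded, mask) pair would serve as network input while the original face acts as ground truth for the reconstructive path.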
Panoramic images are widely used in the diagnosis of dental diseases. During panoramic image reconstruction, the position of the dental arch curve usually affects the quality of the displayed content, especially the completeness of the panoramic image. In addition, metal implants in the patient's mouth often reduce the contrast of the panoramic image. This paper describes a method to automatically synthesize panoramic images from dental cone beam computed tomography (CBCT) data. The proposed method has two essential features: first, it detects the dental arch curve from axial maximum intensity projection images computed over different slice ranges; second, it adjusts the intensity distribution of implants in critical areas to reduce their impact on the contrast of the panoramic image. The method was tested on 50 CBCT datasets; the panoramic images it generated were compared with images obtained from three other commonly used approaches and scored subjectively by three experienced dentists. In the comprehensive image-contrast score, our method achieved the highest score of 11.16 ± 2.64 points. The results show that the panoramic images generated by this method have better image contrast.
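The axial maximum intensity projection underlying the arch-curve detection step can be sketched as below. The (z, y, x) axis order and the function name are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def axial_mip(volume, z0, z1):
    """Maximum intensity projection of a CBCT volume (axes: z, y, x)
    over the axial slice range [z0, z1)."""
    return volume[z0:z1].max(axis=0)

def mips_over_ranges(volume, ranges):
    """Compute one MIP image per slice range; arch-curve detection
    would then be run on each projection."""
    return [axial_mip(volume, z0, z1) for z0, z1 in ranges]
```

Computing MIPs over several overlapping ranges, rather than the whole volume, lets the arch curve be localized at different heights of the jaw.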
The traditional total variation denoising model tends to blur image texture details, among other problems, when removing noise from medical images. To address this, we propose an adaptive fractional-order total variation denoising model for medical images based on an improved sparrow search algorithm. The approach combines the characteristics of fractional-order differential operators with those of total variation models. Thanks to the distinctive amplitude-frequency characteristics of the fractional-order differential operator, the model better preserves weak-texture regions of the image. The order of the fractional-order differential operator is determined adaptively by the improved sparrow search algorithm, which uses both a sine search strategy and a diversity mutation strategy; this greatly improves the denoising ability of the operator. The experimental results show that the model not only adapts the fractional order of the total variation term automatically, but also effectively removes noise, preserves the texture structure of the image to the greatest extent, and improves the peak signal-to-noise ratio; it thus shows favorable prospects for application to medical image denoising.
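Fractional-order differential operators are commonly discretized via the Grünwald–Letnikov (GL) definition; a minimal 1-D sketch is shown below. The GL stencil and its truncation length are standard textbook choices, not necessarily the paper's exact discretization:

```python
import numpy as np

def gl_coefficients(alpha, n):
    """First n Grünwald–Letnikov coefficients for fractional order alpha:
    c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_diff_1d(signal, alpha, n=3):
    """Fractional-order backward difference of a 1-D signal using a
    truncated GL stencil of length n."""
    signal = np.asarray(signal, dtype=float)
    c = gl_coefficients(alpha, n)
    out = np.zeros_like(signal)
    for k in range(n):
        shifted = signal[:signal.size - k] if k else signal
        out[k:] += c[k] * shifted
    return out
```

For alpha = 1 the stencil reduces to the ordinary backward difference, while fractional alpha between 0 and 2 interpolates between smoothing and edge-enhancing behavior, which is what lets an adaptive order trade off noise removal against texture preservation.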
As deep learning technology continues to evolve, the images produced by generative models are becoming increasingly realistic, leading people to question the authenticity of images. Existing methods for detecting generated images either look for visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training. This learning paradigm suffers from efficiency and generalization issues, so detection methods always lag behind generation methods. This paper approaches generated-image detection from a new perspective: start from real images. By finding what real images have in common and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are projected outside this subspace. As a result, images from different generative models can be detected, solving some long-standing problems in the field. Experimental results show that although our method was trained only on real images and uses 99.9% less training data than other deep learning-based methods, it competes with state-of-the-art methods and excels at detecting emerging generative models with high inference efficiency. Moreover, the proposed method is robust to various post-processing operations. These advantages make the method suitable for real-world scenarios.
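The idea of mapping real images into a dense subspace and flagging anything that falls outside it can be sketched, in heavily simplified form, as a PCA-style subspace fit over real-image features. The feature extractor is omitted, and `fit_real_subspace` / `outside_score` are hypothetical names, not the paper's method:

```python
import numpy as np

def fit_real_subspace(feats, k=2):
    """Fit a low-dimensional subspace (mean + top-k principal directions)
    to feature vectors of real images only."""
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    return mean, vt[:k]

def outside_score(x, mean, basis):
    """Reconstruction error of x w.r.t. the real-image subspace;
    a large value suggests the image is generated."""
    d = x - mean
    proj = basis.T @ (basis @ d)
    return float(np.linalg.norm(d - proj))
```

Because only real images are needed to fit the subspace, such a one-class scheme requires no examples from any particular generative model, which is the source of the generalization advantage claimed above.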