With growing interest in hairstyles and hair color, bleaching, dyeing, straightening, and curling are widely practiced worldwide, and the chemical and physical treatment of hair continues to increase. As a result, hair suffers considerable damage, yet the degree of damage has traditionally been assessed only by the naked eye or by touch, leading to serious consequences such as further hair damage and scalp disease. Despite the seriousness of these problems, there is little research on hair damage. With the advancement of technology, people have become interested in preventing and reversing hair damage, but manual observation cannot identify damaged regions accurately or quickly. In recent years, the rise of artificial intelligence and its application across many scenarios has given researchers new methods. In this project, we created a new hair-damage dataset based on SEM (scanning electron microscope) images. Through various physical and chemical analyses, we observed how the hair surface changes with the degree of damage, identified the relationship between them, used a convolutional neural network to recognize and confirm the degree of damage, and categorized it into weak damage, moderate damage, and high damage.
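The classification pipeline described above (SEM image → convolutional feature extraction → three damage classes) can be sketched in NumPy. This is only an illustrative forward pass with random weights, not the trained network from the paper; the layer sizes, kernel count, and class labels are assumptions.

```python
import numpy as np

# Hypothetical class labels matching the three damage levels in the abstract.
CLASSES = ["weak damage", "moderate damage", "high damage"]

def conv2d(img, kernels):
    """Valid-mode 2-D correlation: (H, W) image x (K, kh, kw) kernels -> (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def classify(img, kernels, weights, bias):
    """Conv -> ReLU -> global average pool -> linear -> argmax over three classes."""
    feat = np.maximum(conv2d(img, kernels), 0.0)   # ReLU activation
    pooled = feat.mean(axis=(1, 2))                # global average pool: (K,)
    logits = weights @ pooled + bias               # linear head: (3,)
    return CLASSES[int(np.argmax(logits))]

rng = np.random.default_rng(0)
sem_img = rng.random((16, 16))            # stand-in for a grayscale SEM patch
kernels = rng.standard_normal((4, 3, 3))  # 4 random 3x3 filters (untrained)
weights = rng.standard_normal((3, 4))
bias = np.zeros(3)
print(classify(sem_img, kernels, weights, bias))
```

With random weights the predicted class is arbitrary; the point is only the data flow from image to one of the three damage categories.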
Traditional GAN-based image generation networks cannot accurately and naturally fuse surrounding features in local image generation tasks, especially hairstyle generation. To this end, we propose a novel transformer-based GAN for new hairstyle generation. The framework comprises two modules: a face segmentation (F) module and a transformer generative hairstyle (TGH) module. The F module detects facial and hairstyle features and extracts a global feature mask and a facial feature map. In the TGH module, we design a transformer-based GAN that generates hairstyles and refines the details where the face and the new hairstyle are fused. To verify the effectiveness of our model, the CelebA-HQ (Large-scale CelebFaces Attributes) and FFHQ (Flickr-Faces-HQ) datasets are adopted to train and test it. The FID, PSNR, and SSIM image evaluation metrics are used to assess our model and compare it with other strong image generation networks. Our proposed model is more robust in terms of test scores and realistic image generation.
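Of the three evaluation metrics named above, PSNR and SSIM can be computed directly from an image pair (FID additionally requires a pretrained Inception network, so it is omitted here). A minimal NumPy sketch, using the standard PSNR formula and a simplified single-window SSIM rather than the usual Gaussian-windowed variant:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Simplified SSIM over global statistics (no sliding Gaussian window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.zeros((8, 8))
b = np.full((8, 8), 0.5)
print(round(psnr(a, b), 2))         # MSE = 0.25 -> 10*log10(4) ≈ 6.02 dB
print(round(ssim_global(a, a), 2))  # identical images -> 1.0
```

In practice, library implementations such as `skimage.metrics.peak_signal_noise_ratio` and `skimage.metrics.structural_similarity` (windowed SSIM) are preferable for reporting results.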