When performing fabric defect detection, supervised learning requires ground truth for training, unsupervised learning requires additional training steps, and background noise is generated during the training process. To address these problems, we propose a fabric defect detection model with unsupervised direct defect residual image generation (UDDGAN). The main body of the model uses the generative adversarial network architecture, and we design a patch structure so that defect residual images can be generated directly. We use a generator with a block structure and a double discriminator to bring the generated image closer to the target image. When training the generator, we incorporate a similar-image loss to minimize the generated background noise, which ensures the accuracy of the detection results. We evaluate our method on a benchmark fabric defect detection dataset from Zhejiang University and compare it with six methods. The experimental results show that our method performs well on a variety of metrics.
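The core idea of a defect residual image (which UDDGAN generates directly inside the network) can be sketched as the thresholded absolute difference between a defective input and a defect-free reference. The following is a minimal NumPy sketch; the function name and threshold value are illustrative, not taken from the paper:

```python
import numpy as np

def defect_residual(defect_img: np.ndarray, restored_img: np.ndarray,
                    threshold: int = 30) -> np.ndarray:
    """Binarize the absolute difference between a defective image and a
    defect-free reference to localize defect pixels."""
    # cast to a signed type so the subtraction cannot wrap around
    residual = np.abs(defect_img.astype(np.int16)
                      - restored_img.astype(np.int16))
    return (residual > threshold).astype(np.uint8) * 255

# toy example: flat background with one bright simulated defect patch
restored = np.full((8, 8), 100, dtype=np.uint8)
defect = restored.copy()
defect[2:4, 2:4] = 200  # simulated defect region
mask = defect_residual(defect, restored)
```

In this sketch, `mask` is 255 inside the simulated defect patch and 0 elsewhere; the learned model replaces the hand-set threshold with an end-to-end generated residual.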
Among fabric defect detection methods, unsupervised approaches train a network to restore a defective fabric image to a flawless image with a consistent background and no visible defects, then obtain defect information by comparing the two images. However, most restored images do not remove the defective region completely, and the more obvious defects remain visible. To solve this problem, this paper proposes the jump connection generative adversarial network for fabric defect detection (JCGAN). JCGAN uses a jump connection structure that introduces detail information from the downsampling path into the upsampling process, improving the network's ability to extract details. It introduces a low-dimensional loss function to control network training and improve the quality of the generated images. It uses two detection algorithms (an SSIM-based detection algorithm and a multi-channel defect detection algorithm) to compute the grayscale disparity between the defective and restored images separately, and finally fuses the information from both to obtain more detailed detection results. Compared with six commonly used methods, the F-value of JCGAN is improved by 13.40% on average.