Manufacturers are eager to replace human inspectors with automatic inspection systems to improve their competitive advantage through quality. However, some manufacturers have failed to deploy traditional vision systems because of constraints in data acquisition and feature extraction. In this paper, we propose a deep-learning-based inspection system for a tampon applicator producer that exploits the applicator’s structural characteristics for data acquisition and uses state-of-the-art models, YOLOv4 for object detection and YOLACT for instance segmentation, for feature extraction. During an on-site trial, we encountered some False-Positive (FP) cases and identified a possible Type I error. We took a data-centric approach to the problem, applying two data pre-processing methods: Background Removal (BR) and Contrast Limited Adaptive Histogram Equalization (CLAHE). We analyzed the effect of these methods on inspection using a self-created dataset and found that CLAHE increased Recall by 0.1 at the image level, while both CLAHE and BR improved Precision by 0.04–0.06 at the bounding-box level. These results suggest that a data-centric approach can improve the detection rate. However, the pre-processing techniques degraded the metrics used to measure overall performance, such as F1-score and Average Precision (AP), even though we empirically confirmed that the malfunctions decreased. A detailed analysis of the results revealed cases in which inconsistent data annotation made the decisions ambiguous. Our research alerts AI practitioners that validating a model based only on metrics may lead to wrong conclusions.
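As a rough illustration of the CLAHE pre-processing step mentioned above, the sketch below implements a simplified, tile-wise variant in plain NumPy: per-tile clipped histogram equalization, without the bilinear interpolation between tiles that full CLAHE performs. The tile count and clip limit here are illustrative assumptions, not the paper's settings; in practice one would typically call a library routine such as OpenCV's `cv2.createCLAHE`.

```python
import numpy as np

def clahe_simplified(img, tiles=8, clip_limit=0.02):
    """Simplified CLAHE on a grayscale uint8 image.

    Splits the image into a tiles x tiles grid and equalizes each tile
    with a clipped histogram (excess counts redistributed uniformly).
    Full CLAHE additionally interpolates between tile mappings.
    """
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys = slice(i * th, (i + 1) * th if i < tiles - 1 else h)
            xs = slice(j * tw, (j + 1) * tw if j < tiles - 1 else w)
            block = img[ys, xs]
            hist = np.bincount(block.ravel(), minlength=256).astype(float)
            # Clip the histogram and redistribute the excess uniformly,
            # which limits the contrast amplification (and thus noise).
            limit = clip_limit * block.size
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess / 256
            # Build the per-tile lookup table from the normalized CDF.
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
            lut = (cdf * 255).astype(np.uint8)
            out[ys, xs] = lut[block]
    return out
```

A higher `clip_limit` allows stronger local contrast enhancement at the cost of amplifying sensor noise, which is the trade-off CLAHE exists to control.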
The trend toward multi-variety production means that the product types of silk screen prints change at short intervals. Because the types and locations of defects in silk screen prints can vary greatly, it is difficult for operators to inspect for minuscule defects. In this paper, an improved U-Net++ based on patch splitting is proposed for automated quality inspection of small or tiny defects, hereinafter referred to as ‘fine’ defects. The novelty of the method is that, to better handle fine defects within an image, patch-level inputs are used instead of the original image. With the original image as input, the existing technique does not use artificial intelligence (AI) learning efficiently, whereas our proposed method learns stably and achieved a Dice score of 0.728, approximately 10% higher than the existing method. The proposed model was applied to an actual silk screen printing process, where all of the fine defects in products such as silk screen prints could be detected regardless of product size. In addition, it was shown that quality inspection with patch-split-based AI is possible even when little prior defect data is available.
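To make the patch-split idea and the Dice score concrete, here is a minimal NumPy sketch: the image is padded to a multiple of the patch size and cut into non-overlapping tiles that would each be fed to the segmentation network. The patch size and padding scheme are illustrative assumptions; the abstract does not specify the paper's exact values.

```python
import numpy as np

def split_into_patches(img, patch=256):
    """Zero-pad an image to a multiple of `patch`, then split it into
    non-overlapping patch x patch tiles (row-major order)."""
    h, w = img.shape[:2]
    ph = (patch - h % patch) % patch
    pw = (patch - w % patch) % patch
    padded = np.pad(img, ((0, ph), (0, pw)) + ((0, 0),) * (img.ndim - 2))
    H, W = padded.shape[:2]
    return [padded[y:y + patch, x:x + patch]
            for y in range(0, H, patch)
            for x in range(0, W, patch)]

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks:
    2*|pred ∩ target| / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Working at patch level keeps fine defects at a usable scale relative to the network input, instead of shrinking them to a few pixels when the whole print is resized.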