Various unsupervised anomaly detection methods using deep learning have recently been proposed, and the accuracy of anomaly detection for local anomalies has improved. However, no existing anomaly detection dataset includes co-occurrence anomalies, i.e., combination-related anomalies in which individually normal parts appear in an abnormal combination, so detection accuracy for such anomalies has not progressed. We therefore propose SA-PatchCore, which introduces self-attention into PatchCore, a state-of-the-art local anomaly detection model. It detects both anomalies in co-occurrence relationships and anomalies in local areas, benefiting from the self-attention mechanism, first introduced in the natural language processing field, which can model context between separated words. Because no anomaly detection dataset includes anomalies in co-occurrence relationships, we prepared a new dataset, the Co-occurrence Anomaly Detection Screw Dataset (CAD-SD), and performed anomaly detection experiments on it. SA-PatchCore achieves higher anomaly detection performance than PatchCore on CAD-SD. Moreover, our proposed model shows almost the same anomaly detection performance as PatchCore on the MVTec Anomaly Detection dataset, which is composed of anomalies in local areas. As a contribution to the anomaly detection task, we have released CAD-SD to the public. The dataset can be downloaded from the following link: https://github.com/IshidaKengo/Co-occurrence-Anomaly-Detection-Screw-Dataset
INDEX TERMS Anomaly detection, deep learning, self-attention
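The abstract does not give implementation details, but the core idea — letting each local patch embedding aggregate information from every other patch so that co-occurrence relationships influence the anomaly score — can be illustrated with a minimal single-head self-attention sketch over patch features. The weight matrices `Wq`, `Wk`, `Wv` and the NumPy formulation are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, Wq, Wk, Wv):
    """Single-head self-attention over a set of local patch features.

    patches: (N, d) array of patch embeddings from a feature extractor.
    Wq, Wk, Wv: (d, d) projection matrices (hypothetical, untrained here).

    Returns an (N, d) array in which each patch embedding is a weighted
    mixture of all patches, so downstream scoring can react to abnormal
    combinations of individually normal parts.
    """
    Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
    # Scaled dot-product attention: rows sum to 1 after softmax.
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return attn @ V
```

In a PatchCore-style pipeline, these context-aware embeddings would replace the purely local patch features before the memory-bank nearest-neighbor scoring step.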
One way to improve annotation efficiency is active learning, whose goal is to select, from a large pool of unlabeled images, those whose labeling will most improve the accuracy of the machine learning model. To select the most informative unlabeled images, conventional methods use deep neural networks with a large number of computation nodes and long computation times; we instead propose a method that requires no additional training for unlabeled image selection. The proposed method trains a task model on the labeled images and then uses the model to predict on the unlabeled images. From these predictions, an uncertainty indicator is computed for each unlabeled image; images with a high uncertainty indicator are considered the most informative and are selected for annotation. Our proposed method is based on a very simple and powerful idea: select samples near the decision boundary of the model. Experimental results on multiple datasets show that the proposed method achieves higher accuracy than conventional active learning methods on multiple tasks, with up to 14 times faster execution time (from 1.2 × 10⁶ s to 8.3 × 10⁴ s). The proposed method also outperforms the current state-of-the-art method by 1% accuracy on CIFAR-10.
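The abstract does not specify the exact uncertainty indicator; margin sampling — ranking samples by the gap between the top two predicted class probabilities — is one standard instance of "select samples near the decision boundary," and is sketched below under that assumption. The function name and interface are illustrative:

```python
import numpy as np

def select_for_annotation(probs, budget):
    """Pick the `budget` unlabeled samples closest to the decision boundary.

    probs: (N, C) array of predicted class probabilities for N unlabeled
           images, as produced by the task model trained on labeled data.
    budget: number of images to send for annotation.

    Uses margin sampling: a small gap between the top-2 class
    probabilities means the model is uncertain, i.e., the sample lies
    near the decision boundary.
    """
    sorted_p = np.sort(probs, axis=1)          # ascending per row
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # top-1 minus top-2
    return np.argsort(margin)[:budget]          # smallest margins first
```

Because selection needs only one forward pass over the unlabeled pool plus a sort, there is no extra network to train, which is consistent with the speedup the abstract reports.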