2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9412109

Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection

Cited by 135 publications (86 citation statements)
References 18 publications
“…Similarly, [14] recently proposed PaDiM, which utilizes a locally constrained bag-of-features approach [8], estimating patch-level feature distribution moments (mean and covariance) for patch-level Mahalanobis distance measures [33]. This approach is similar to [40], which studied full images. To better account for the distribution shift between natural pretraining data and industrial image data, subsequent adaptation can be done, e.g.…”
Section: Related Work
confidence: 99%
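To make the cited approach concrete, below is a minimal sketch of patch-level Gaussian fitting and Mahalanobis scoring in the style described above; the array shapes, the loop-based covariance estimation, and the 0.01 regularizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_patch_gaussians(train_feats):
    """Estimate a Gaussian (mean, covariance) per patch position.

    train_feats: (N, P, D) array of N normal images, P patch
    positions, D-dimensional pretrained features (shapes assumed).
    """
    N, P, D = train_feats.shape
    mean = train_feats.mean(axis=0)              # (P, D)
    cov = np.empty((P, D, D))
    for p in range(P):
        # rowvar=False: rows are observations, columns are variables
        cov[p] = np.cov(train_feats[:, p, :], rowvar=False)
        cov[p] += 0.01 * np.eye(D)               # small regularizer (assumed)
    return mean, cov

def mahalanobis_scores(test_feats, mean, cov):
    """Patch-level Mahalanobis distances for one test image.

    test_feats: (P, D) array; a higher score marks a more anomalous patch.
    """
    scores = np.empty(test_feats.shape[0])
    for p in range(test_feats.shape[0]):
        diff = test_feats[p] - mean[p]
        scores[p] = np.sqrt(diff @ np.linalg.inv(cov[p]) @ diff)
    return scores
```

The per-patch scores can then be reshaped to the feature-map grid and upsampled to obtain an anomaly localization map.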
“…where Theorem 1 below shows that the optimal W consists of the eigenvectors associated with the k smallest eigenvalues of C_{i,j}. Notice that 1) the computational complexity of the equation is cubically reduced to O(HWk^3), setting aside the cost of the SVD, which remains a concern; 2) a PCA embedding would fail to minimize the approximation error, since it uses the k largest eigenvectors [14]; and 3) near-zero eigenvalues may induce substantial anomaly scores. For the last point, a previous work suggests using C + I for the inverse to avoid possible numerical problems [7], which we follow.…”
Section: Low-rank Approximation of Precision Matrix
confidence: 99%
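A hedged sketch of the low-rank construction this excerpt describes: keep the eigenvectors of the k smallest eigenvalues of a symmetric covariance C (the opposite of PCA's choice), and regularize the inverse with an identity term to guard against near-zero eigenvalues; the epsilon scale and the function names are assumptions.

```python
import numpy as np

def low_rank_precision_basis(C, k, eps=1e-2):
    """Eigenvectors of the k smallest eigenvalues of C (regularized).

    C: (D, D) symmetric covariance. Adding eps*I to C shifts every
    eigenvalue by eps, implementing the excerpt's "C + I"-style
    regularized inverse (the eps scale is an assumption).
    """
    # np.linalg.eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(C + eps * np.eye(C.shape[0]))
    return eigvecs[:, :k], eigvals[:k]   # k smallest, unlike PCA

def approx_mahalanobis(x, mu, W, eigvals):
    """Mahalanobis distance restricted to the kept subspace.

    Since the precision matrix weights each direction by the inverse
    eigenvalue, the k smallest eigenvalues carry its dominant terms.
    """
    z = W.T @ (x - mu)                   # project the centered feature
    return np.sqrt(np.sum(z ** 2 / eigvals))
```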
“…The anomaly score is in this case the distance between the embedding vectors of a test image and reference vectors representing normality in the training dataset. The normal reference can be the center of an n-sphere containing embeddings from normal images [4], [22], the parameters of Gaussian distributions [23], [26], or the entire set of normal embedding vectors [5], [24]. The last option is used by SPADE [5], which has the best reported results for anomaly localization.…”
Section: Related Work
confidence: 99%
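Of the three references to normality listed above, the last (the full set of normal embeddings, as in SPADE) admits a particularly short sketch; the plain k-nearest-neighbor average below is an assumed simplification for illustration.

```python
import numpy as np

def knn_anomaly_score(test_emb, normal_embs, k=5):
    """Mean distance from a test embedding to its k nearest normal
    embeddings (SPADE-style; k=5 is an assumed value).

    test_emb: (D,) test-image embedding.
    normal_embs: (N, D) gallery of normal training embeddings.
    """
    dists = np.linalg.norm(normal_embs - test_emb, axis=1)  # (N,)
    return np.sort(dists)[:k].mean()     # larger means more anomalous
```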
“…However, the normal class in PaDiM is described through a set of Gaussian distributions that also model correlations between the semantic levels of the pretrained CNN model used. Inspired by [5], [23], we choose as pretrained networks a ResNet [27], a Wide ResNet [28], or an EfficientNet [29]. Thanks to this modeling, PaDiM outperforms the current state-of-the-art methods.…”
Section: Related Work
confidence: 99%
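The correlations between semantic levels mentioned above presuppose per-patch vectors that span several levels of the backbone. Below is a speculative sketch of such a multi-level embedding, in the spirit of PaDiM but not a reproduction of it; the layer choice, the ResNet-18 backbone, and the bilinear upsampling are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Capture the outputs of three semantic levels via forward hooks
# (layer choice assumed for illustration).
model = resnet18(weights="IMAGENET1K_V1").eval()
feats = {}
for name in ("layer1", "layer2", "layer3"):
    getattr(model, name).register_forward_hook(
        lambda mod, inp, out, n=name: feats.__setitem__(n, out))

@torch.no_grad()
def multi_level_embedding(x):
    """Concatenate level-wise feature maps so that each patch vector
    spans several semantic levels, letting a per-patch Gaussian model
    their correlations. x: (B, 3, H, W) normalized image batch."""
    model(x)
    target = feats["layer1"].shape[-2:]          # finest spatial grid
    maps = [F.interpolate(feats[n], size=target,
                          mode="bilinear", align_corners=False)
            for n in ("layer1", "layer2", "layer3")]
    return torch.cat(maps, dim=1)                # (B, D1+D2+D3, h, w)
```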