2019
DOI: 10.1109/tnnls.2019.2933554
A Deep One-Class Neural Network for Anomalous Event Detection in Complex Scenes

Cited by 112 publications
(94 citation statements)
References 46 publications
“…Deep one-class classification methods aim to overcome these challenges by learning useful neural network feature maps φω : X → Z from the data, or by transferring such networks from related tasks. Deep SVDD [137], [144], [145], [327] and deep OC-SVM variants [136], [328] build on objectives (16) and (20), respectively. These methods are typically optimized with SGD variants [329]–[331], which, together with GPU parallelization, lets them scale to large data sets.…”
Section: Deep One-class Classificationmentioning
confidence: 99%
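As a rough illustration of the hypersphere-style objective described in the excerpt above, here is a minimal numpy sketch, not the paper's actual method: a linear map stands in for the deep feature map φω, the center is fixed from an initial forward pass, and gradient descent shrinks the mean squared distance of embeddings to that center. All data, shapes, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" training data (hypothetical stand-in for scene features).
X = rng.normal(loc=2.0, size=(200, 4))

# A linear map phi(x) = W x stands in for the deep feature map phi_omega.
W = rng.normal(scale=0.5, size=(2, 4))

# Deep-SVDD-style setup: fix the hypersphere center c to the mean
# embedding of an initial forward pass, then minimize the mean
# squared distance of embeddings to c by gradient descent (the SGD
# analogue mentioned in the excerpt).
c = (X @ W.T).mean(axis=0)

def loss(W):
    Z = X @ W.T
    return float(np.mean(np.sum((Z - c) ** 2, axis=1)))

loss_before = loss(W)
lr = 0.01
for _ in range(200):
    diff = X @ W.T - c                     # (n, 2) residuals to the center
    W -= lr * (2.0 / len(X)) * diff.T @ X  # gradient of the mean squared distance

loss_after = loss(W)

# Anomaly score of a new point: squared distance from the center.
def score(x):
    return float(np.sum((W @ x - c) ** 2))
```

In full deep one-class methods the same objective is applied to a multi-layer network rather than a single linear map, which is why the regularization issues discussed next arise.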
“…A recurring question in deep one-class classification is how to meaningfully regularize against a feature map collapse φω ≡ c. Without regularization, minimum volume or maximum margin objectives, such as (16), (20), or (22), could be trivially solved with a constant mapping [137], [333]. Possible solutions for this include adding a reconstruction term or architectural constraints [137], [327], freezing the embedding [136], [139], [140], [142], [334], inversely penalizing the embedding variance [335], using true [144], [336], auxiliary [139], [233], [332], [337], or artificial [337] negative examples in training, pseudolabeling [152], [153], [155], [335], or integrating some manifold assumption [333]. Further variants of deep one-class classification include multimodal [145] or time-series extensions [338] and methods that employ adversarial learning [138], [141], [339] or transfer learning [139], [142].…”
Section: Deep One-class Classificationmentioning
confidence: 99%
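The collapse problem in the excerpt above can be made concrete with a tiny numpy sketch (all names hypothetical): a constant map attains zero one-class loss if the center may be chosen freely, whereas freezing the center once from an initial data-dependent embedding, with a bias-free map, removes that trivial solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (hypothetical stand-in for training features).
X = rng.normal(loc=2.0, size=(100, 4))

# Feature map collapse: a constant map phi(x) = c0 attains exactly
# zero one-class loss whenever the center may be set to c = c0.
c0 = np.ones(2)
Z_collapsed = np.tile(c0, (len(X), 1))
loss_collapsed = float(np.mean(np.sum((Z_collapsed - c0) ** 2, axis=1)))

# One remedy named in the excerpt (freezing/fixing quantities): set
# the center once from an initial embedding of the data and use a
# bias-free map.  The all-zero map then pays ||c||^2 per point
# instead of reaching zero loss.
W0 = rng.normal(size=(2, 4))
c = (X @ W0.T).mean(axis=0)
W_zero = np.zeros_like(W0)
loss_zero_map = float(np.mean(np.sum((X @ W_zero.T - c) ** 2, axis=1)))
```

The other remedies listed (reconstruction terms, negative examples, pseudolabeling) all serve the same purpose: making the constant mapping no longer a minimizer of the training objective.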
“…Figure 7 shows examples of the pixel‐level detection results. The proposed method is compared with SCL, 30 semi‐supervised learning, 11 one‐class classification 17 and other PU learning 20 methods. Figure 7 shows that our method successfully detects anomalies and locates them more accurately than the other methods.…”
Section: Resultsmentioning
confidence: 99%
“…(C) Detection results of TSR‐AE 11. (D) Detection results of DOCNN 17. (E) Detection results of RPNA 20.…”
Section: Resultsmentioning
confidence: 99%