2019
DOI: 10.1609/aaai.v33i01.33015216
Robust Anomaly Detection in Videos Using Multilevel Representations

Abstract: Detecting anomalies in surveillance videos has long been an important but unsolved problem. In particular, many existing solutions are overly sensitive to (often ephemeral) visual artifacts in the raw video data, resulting in false positives and fragmented detection regions. To overcome such sensitivity and to capture true anomalies with semantic significance, one natural idea is to seek validation from abstract representations of the videos. This paper introduces a framework of robust anomaly detection using …

Cited by 83 publications (57 citation statements) · References 24 publications (38 reference statements)
“…After obtaining W_d and , two comparable low-dimensional feature vectors of each gene in the two networks are built and then compared to detect affected genes. Our approach to creating low-dimensional feature vectors is inspired by manifold alignment [7, 8] and its application [91]; we refer to it as quasi-manifold alignment because the adjacency matrices used here are not symmetric, whereas the original procedure requires them to be symmetric. Here W_d and serve as the inputs for manifold alignment, and the outputs are the low-dimensional features and of genes before and after knocking out the target gene, where k << n.…”
Section: Methods
confidence: 99%
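The joint-embedding step the quoted passage describes can be sketched as follows. This is an illustrative toy version, not the authors' quasi-manifold alignment: it symmetrizes the two adjacency matrices (precisely the requirement the quasi-variant relaxes), couples each gene with its counterpart via a hypothetical weight `mu`, and takes Laplacian eigenvectors of the joint graph as comparable k-dimensional features.

```python
import numpy as np

def joint_embed(W1, W2, k=2, mu=1.0):
    """Toy manifold-alignment-style joint embedding (illustrative only).
    Symmetrizes both adjacency matrices, couples gene i in network 1
    with gene i in network 2 via weight mu, and returns k-dimensional
    features from the smallest nontrivial eigenvectors of the joint
    graph Laplacian."""
    n = W1.shape[0]
    S1 = (W1 + W1.T) / 2          # classic manifold alignment assumes
    S2 = (W2 + W2.T) / 2          # symmetric matrices, so symmetrize
    C = mu * np.eye(n)            # correspondence: gene i <-> gene i
    W = np.block([[S1, C], [C, S2]])
    L = np.diag(W.sum(axis=1)) - W        # joint graph Laplacian
    _, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    E = vecs[:, 1:k + 1]                  # drop the trivial eigenvector
    return E[:n], E[n:]                   # features before / after

def rank_affected(W1, W2, k=2):
    """Rank genes by how far their feature vectors move between the
    two networks (most affected first)."""
    F1, F2 = joint_embed(W1, W2, k)
    d = np.linalg.norm(F1 - F2, axis=1)
    return np.argsort(d)[::-1]
```

Because both networks are embedded in one shared eigenbasis, the per-gene feature vectors are directly comparable, sidestepping the sign and rotation ambiguity that separate decompositions would introduce.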
“…The WT scGRN is copied and then converted to a pseudo-KO scGRN by artificially zeroing out the target gene in the adjacency matrix. A quasi-manifold alignment method [7, 8] is adapted to compare the two scGRNs (WT vs. pseudo-KO). By comparing the two scGRNs, scTenifoldKnk predicts changes in transcriptional programs and assesses the impact of KO on the WT scGRN.…”
Section: Introduction
confidence: 99%
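The pseudo-KO construction described above is a one-line matrix edit; a minimal sketch follows. One assumption to flag: this sketch zeroes both the target gene's row and column, whereas the actual procedure may zero only one of them depending on how edge orientation is encoded.

```python
import numpy as np

def pseudo_knockout(W_wt, gene):
    """Build a pseudo-KO scGRN from the WT adjacency matrix by
    zeroing out the target gene's entries (illustrative sketch;
    row = gene as regulator, column = gene as target)."""
    W_ko = W_wt.copy()        # leave the WT network intact
    W_ko[gene, :] = 0.0       # edges where the gene is the regulator
    W_ko[:, gene] = 0.0       # edges where the gene is the target
    return W_ko
```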
“…In particular, when , the model cannot distinguish between classes, i.e., it performs no better than random guessing. For quantitative evaluation of abnormality localization, we use the same pixel-level AUC and EER metrics reported in [19]. If the intersection between a detected box and the ground-truth box is smaller than 40% of the area of the ground-truth box, the detected box is removed.…”
Section: Methods
confidence: 99%
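The 40% overlap filtering rule in the quoted passage translates directly into code. A sketch, assuming boxes are (x1, y1, x2, y2) corner tuples (a convention of this sketch, not stated in the original):

```python
def keep_detection(det, gt, min_frac=0.4):
    """Keep a detected box only if its intersection with the
    ground-truth box covers at least min_frac (here 40%) of the
    ground-truth area, per the quoted evaluation protocol."""
    ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
    ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter >= min_frac * gt_area
```

Note that, unlike IoU, this criterion is normalized only by the ground-truth area, so a large detected box that fully covers the ground truth is always kept.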
“…Most state-of-the-art studies [4, 5, 6] have not reported quantitative performance for the abnormality-localization task, providing only qualitative analysis. Hence, we compare our quantitative results with the recent methods of Vu et al. [19, 26], which apply the same evaluation metrics. We achieve significant improvements on both metrics.…”
Section: Methods
confidence: 99%
“…Vu et al. [3] present a robust anomaly-detection model utilizing multilevel representations of both intensity and motion information. The proposed multilevel detector achieves significant improvements in pixel-level Equal Error Rate, namely 11.35%, 12.32%, and 4.31% on the UCSD Ped1, UCSD Ped2, and Avenue datasets, respectively.…”
Section: Related Work
confidence: 99%
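The pixel-level Equal Error Rate cited above is the operating point where the false-positive rate and the false-negative rate coincide. A minimal sketch of how it can be approximated by a threshold sweep (an illustrative approximation, not the paper's evaluation code):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate the EER: sweep every distinct score as a decision
    threshold and return the mean of FPR and FNR at the threshold
    where the two rates are closest."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best_fpr, best_fnr = 1.0, 0.0              # worst-case starting gap
    for t in np.unique(scores):
        pred = scores >= t                     # predicted anomalous
        fpr = float(np.mean(pred[labels == 0]))    # normals flagged
        fnr = float(np.mean(~pred[labels == 1]))   # anomalies missed
        if abs(fpr - fnr) < abs(best_fpr - best_fnr):
            best_fpr, best_fnr = fpr, fnr
    return (best_fpr + best_fnr) / 2
```

A lower EER is better; a perfectly separating score assignment yields an EER of 0.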