2021 · Preprint
DOI: 10.48550/arxiv.2103.10502

Ano-Graph: Learning Normal Scene Contextual Graphs to Detect Video Anomalies

Cited by 5 publications (13 citation statements). References 0 publications.
Citation types: 1 supporting, 9 mentioning, 0 contrasting.
“…The noise of intensity-based features can be reduced by extracting features that describe human skeletons and motion instead of pixels. By leveraging a novel structure, our approach achieves advantages over RNNs and improves understanding of the global scenes, in contrast to the coarse-grained STGCNN [5], [70]. As expected, anomalous events involving humans can be correctly detected in different scenes of the ShanghaiTech dataset, as shown in Fig.…”
Section: B. Evaluation, 1) Comparisons With Existing Methods on Accuracy (supporting)
confidence: 52%
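
The skeleton-plus-motion idea in the statement above can be made concrete with a short sketch: each tracked person is reduced to pose keypoints, which are normalized and augmented with joint velocities, so pixel-level intensity noise never enters the feature. This is a minimal illustration under assumed input shapes; the function name and normalization choices are hypothetical, not the citing paper's actual pipeline.

```python
import numpy as np

def skeleton_motion_features(keypoints):
    """Build per-frame features from pose keypoints instead of pixels.

    keypoints: array of shape (T, J, 2) -- T frames, J joints, (x, y).
    Hypothetical sketch: normalize each pose and append joint velocities,
    one common way to suppress pixel-level intensity noise.
    """
    kp = np.asarray(keypoints, dtype=np.float64)
    # Translate each pose so its centroid sits at the origin (position-invariant).
    kp = kp - kp.mean(axis=1, keepdims=True)
    # Scale each pose to unit size (roughly camera-distance-invariant).
    scale = np.linalg.norm(kp, axis=(1, 2), keepdims=True) + 1e-8
    kp = kp / scale
    # Joint velocities approximate motion between consecutive frames.
    vel = np.diff(kp, axis=0, prepend=kp[:1])
    # Concatenate pose and motion into one flat feature vector per frame.
    return np.concatenate([kp, vel], axis=-1).reshape(len(kp), -1)
```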
“…Table I lists the comparison between the proposed HSTGCNN model and the latest state-of-the-art methods using ROC AUC. The ten methods are Frame-Pred [1], MPED-RNN [2], w/Mem [67], ST-GCAE [68], Multi-timescale [9], PoseCVAE [60], LSA [69], Ano-Graph [70], AnomalyNet [71], and Normal Graph [5]; some of them integrate a model focusing on appearance and motion with others dealing with the trajectories of human skeletons. From these experiments, we can conclude that HSTGCNN outperforms the ten methods mentioned above on four public datasets, including the Human-Related (HR) and original datasets.…”
Section: B. Evaluation, 1) Comparisons With Existing Methods on Accuracy (mentioning)
confidence: 99%
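
Frame-level ROC AUC, the metric used in this comparison, is typically computed by sweeping a threshold over per-frame anomaly scores against binary ground-truth labels. A minimal sketch with scikit-learn; the scores and labels here are synthetic placeholders, not results from any of the cited methods:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder per-frame anomaly scores and ground-truth labels
# (1 = anomalous frame, 0 = normal frame).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels * 0.6 + rng.random(1000) * 0.5  # noisy but informative scores

# Frame-level ROC AUC: the probability that a randomly chosen anomalous
# frame scores higher than a randomly chosen normal frame.
print(f"ROC AUC: {roc_auc_score(labels, scores):.3f}")
```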
“…On the other hand, the latter methods explicitly combine images or regions with their relationships as context to understand and discover diverse image-level or region-level anomalies, such as in video surveillance [1,2] and human monitoring [7,8,9]. Among such works, several approaches [9,15,16,27,28] consider regions and their relations from a visual perspective for region anomaly detection, while our previous methods [7,8] additionally adopt deep-captioning models, such as DenseCap [17], to obtain region captions as semantic information for the task. Sun et al. [27] proposed a Spatio-Temporal Graph (STG) to represent spatio-temporal relations among objects to bridge the gap between an anomaly and its context.…”
Section: Related Work (mentioning)
confidence: 99%
“…Sun et al. [27] proposed a Spatio-Temporal Graph (STG) to represent spatio-temporal relations among objects to bridge the gap between an anomaly and its context. Similarly, Ano-Graph [28] detects video anomalies by modeling spatio-temporal interactions among objects via self-supervised learning. Moreover, Spatial-Temporal Graph-based Convolutional Neural Networks (STGCNs) [13] construct a spatial similarity graph and a temporal consistency graph with a self-attention mechanism to model the correlations of video clips for video anomaly detection.…”
Section: Related Work (mentioning)
confidence: 99%
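
The object-level spatio-temporal graphs described above (STG [27], Ano-Graph [28]) share a common skeleton of a construction: detected objects become nodes, spatial edges link objects co-occurring in a frame, and temporal edges link the same tracked object across frames. The sketch below illustrates that construction only; the node features, detector output format, and edge rules are assumptions, not the cited papers' exact designs:

```python
from dataclasses import dataclass, field

@dataclass
class STGraph:
    # (frame, track_id) -> feature vector (e.g., an appearance embedding)
    nodes: dict = field(default_factory=dict)
    spatial_edges: list = field(default_factory=list)   # same-frame pairs
    temporal_edges: list = field(default_factory=list)  # cross-frame pairs

def build_st_graph(detections):
    """detections: list of frames, each a list of (track_id, feature) pairs."""
    g = STGraph()
    for t, frame in enumerate(detections):
        for track_id, feat in frame:
            g.nodes[(t, track_id)] = feat
        # Spatial edges: every pair of objects co-occurring in frame t.
        ids = [tid for tid, _ in frame]
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                g.spatial_edges.append(((t, ids[i]), (t, ids[j])))
        # Temporal edges: the same tracked object in consecutive frames.
        if t > 0:
            prev = {tid for tid, _ in detections[t - 1]}
            for tid in ids:
                if tid in prev:
                    g.temporal_edges.append(((t - 1, tid), (t, tid)))
    return g
```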
“…After obtaining this distance, we determine whether the video is normal by checking whether the distance exceeds some threshold. This approach can be viewed as a simplified version of [19,20]. As official implementations of these methods were not available during this study, we are unable to provide a comparison.…”
Section: Image Features-based AD (mentioning)
confidence: 99%
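
The decision rule described in this statement reduces to a one-line comparison: a video is flagged anomalous when its feature distance from a model of normality exceeds a threshold. A minimal sketch, assuming a Euclidean distance to a centroid of normal-video features; the distance choice and threshold value are placeholders, not the calibration used in [19,20]:

```python
import numpy as np

def is_anomalous(video_feature, normal_centroid, threshold=1.0):
    """Flag a video whose feature lies too far from the normal centroid.

    `threshold` is a placeholder; in practice it would be calibrated on
    held-out normal videos (e.g., a high percentile of their distances).
    """
    distance = np.linalg.norm(np.asarray(video_feature) - np.asarray(normal_centroid))
    return distance > threshold
```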