2017
DOI: 10.1007/s00138-017-0869-8
Fast and accurate detection and localization of abnormal behavior in crowded scenes

Cited by 22 publications (28 citation statements)
References 56 publications
“…deep networks has often been defined by minimizing the reconstruction error [34], as in auto-encoders (AEs). AEs have been shown to be great tools for unsupervised representation learning in a variety of tasks, including image inpainting [43], feature ranking [54], denoising [57], clustering [65], defense against adversarial examples [35], and anomaly detection [48,52]. Although AEs have led to far-reaching success in data representation, there are some caveats associated with using reconstruction error as the sole objective for representation learning: (1) as also argued in [58], it forces the model to reconstruct all parts of the input, even those that are irrelevant to the given task or contaminated by noise; (2) it leads to a mechanism that depends entirely on single-point data abstraction, i.e., the AE learns to reconstruct only its own input while neglecting the other data points present in the dataset.…”
Section: Introduction (mentioning)
confidence: 99%
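The reconstruction-error criterion discussed above can be illustrated with a minimal sketch. As a deliberately simplified stand-in for a deep auto-encoder (an assumption, not the cited papers' architecture), the linear encoder/decoder below is equivalent to PCA: it is fit on normal data only, so samples that fall off the learned subspace incur large reconstruction errors. All function names and toy data are hypothetical.

```python
import numpy as np

def fit_linear_ae(X, k):
    """Fit a linear stand-in for an auto-encoder (PCA) on normal data only."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T  # (n_features, k): shared encoder/decoder weights

def reconstruction_error(X, mu, W):
    """Anomaly score: squared reconstruction error after encode/decode."""
    Z = (X - mu) @ W          # encode into the k-dim code space
    X_hat = Z @ W.T + mu      # decode back to input space
    return ((X - X_hat) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))  # rank-3 "normal" data
mu, W = fit_linear_ae(normal, k=3)
scores_normal = reconstruction_error(normal, mu, W)   # near zero on the subspace
outlier = rng.normal(size=(1, 10)) * 5                # lies off the normal subspace
score_out = reconstruction_error(outlier, mu, W)[0]   # much larger than any normal score
```

Thresholding `score_out` against the distribution of `scores_normal` is the single-point abstraction the quoted caveat refers to: only the per-sample reconstruction residual matters, not relationships between data points.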
“…For Ped1, we compare our proposed approach to both traditional methods (SRC [6], MPPCA [43], and MDT [40]) and high-level deep-learning-based methods (AVID [19], Sabokrou [8], and deep cascade [16]). As introduced and calculated in [26], evaluation metrics such as the equal error rate (EER) and the area under the curve (AUC) are computed at the frame level and compared against the state-of-the-art methods.…”
Section: Results (mentioning)
confidence: 99%
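The frame-level EER and AUC mentioned above can be computed directly from per-frame anomaly scores and ground-truth labels. The following is a minimal NumPy sketch; the helper name `auc_eer` and the toy scores are illustrative, not from the cited evaluation.

```python
import numpy as np

def auc_eer(scores, labels):
    """Frame-level AUC and EER from anomaly scores.
    labels: 1 = abnormal frame, 0 = normal frame (ground truth)."""
    order = np.argsort(-scores)                 # most anomalous frames first
    y = labels[order]
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (1 - y).sum()])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal area
    fnr = 1.0 - tpr
    eer = fpr[np.argmin(np.abs(fnr - fpr))]     # where miss rate ~= false-alarm rate
    return auc, eer

scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0])
auc, eer = auc_eer(scores, labels)   # perfectly separable toy case: AUC 1.0, EER 0.0
```

A lower EER and a higher AUC both indicate better frame-level detection, which is why the two are reported together in this literature.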
“…Reconstruction was done as a linear combination of dictionary bases that are representative of all normal samples. The dictionary can be learned offline through codebook generation, or online by updating it as new normal samples are observed [8].…”
Section: Literature Review (mentioning)
confidence: 99%
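The dictionary-based reconstruction described above can be sketched minimally, under the simplifying assumption of plain least-squares coding rather than a true sparse solver; the toy dictionary, the residual threshold, and the naive online update are all illustrative, not the cited method.

```python
import numpy as np

def code_and_residual(x, D):
    """Reconstruct x as a linear combination of dictionary atoms (columns of D).
    The residual norm is the anomaly score: normal samples fit well, abnormal ones do not."""
    alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
    return alpha, float(np.linalg.norm(D @ alpha - x))

def online_update(D, x, max_atoms=64):
    """Naive online codebook update: absorb a newly observed normal sample as an atom."""
    if D.shape[1] < max_atoms:
        D = np.column_stack([D, x / np.linalg.norm(x)])
    return D

D = np.eye(4)[:, :2]                       # toy dictionary spanning a 2-d subspace
_, r_normal = code_and_residual(np.array([1.0, 2.0, 0.0, 0.0]), D)    # in the span: ~0
_, r_abnormal = code_and_residual(np.array([0.0, 0.0, 3.0, 0.0]), D)  # off the span: large
```

The offline/online distinction in the quote maps onto when `D` is built: generated once from a training codebook, or grown incrementally with `online_update` as new normal samples arrive.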
“…In both cases, a Gaussian model was constructed to represent the normal pattern, and new patches were marked as anomalous if both classifiers gave an anomalous response. The same authors refined their work in [60], using the local descriptor as a fast rejector of easy patches. When a new patch arrived, the local descriptor classified it.…”
Section: Deep Learning for Crowd Anomaly Detection: Approaches and Nu… (mentioning)
confidence: 99%
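The cascade idea above, a cheap local descriptor that fast-rejects easy normal patches before a costlier model runs, can be sketched as follows. For simplicity (an assumption, not the cited authors' design), both stages here score the same feature vector with a single Gaussian via squared Mahalanobis distance, and the thresholds are arbitrary.

```python
import numpy as np

class GaussianStage:
    """Single Gaussian over normal descriptors; squared Mahalanobis distance as score."""
    def fit(self, X):
        self.mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        self.inv_cov = np.linalg.inv(cov)
        return self

    def score(self, x):
        d = x - self.mu
        return float(d @ self.inv_cov @ d)

def cascade_classify(x, fast, slow, t_fast, t_slow):
    """Stage 1 (cheap local descriptor) quickly rejects easy normal patches;
    only suspicious patches pay for the costlier stage-2 model."""
    if fast.score(x) < t_fast:
        return "normal"                        # fast rejection of an easy patch
    return "abnormal" if slow.score(x) > t_slow else "normal"

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                  # descriptors of normal training patches
fast = GaussianStage().fit(X)
slow = GaussianStage().fit(X)                  # stand-in: same features for both stages
easy = cascade_classify(np.zeros(2), fast, slow, t_fast=9.0, t_slow=13.8)
hard = cascade_classify(np.array([10.0, 10.0]), fast, slow, t_fast=9.0, t_slow=13.8)
```

The speed benefit comes from most patches in a crowded scene being easy normals: they exit at stage 1, so the expensive second classifier runs only on the rare suspicious patches.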