2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00206

Training Adversarial Discriminators for Cross-Channel Abnormal Event Detection in Crowds

Abstract: Abnormal crowd behaviour detection attracts considerable interest due to its importance in video surveillance scenarios. However, the ambiguity and the lack of sufficient abnormal ground-truth data make end-to-end training of large deep networks hard in this domain. In this paper we propose to use Generative Adversarial Nets (GANs), which are trained to generate only the normal distribution of the data. During the adversarial GAN training, a discriminator (D) is used as a supervisor for the generator network (G) a…
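To make the abstract's mechanism concrete, the following is a minimal PyTorch-style sketch (not the authors' code) of using the trained discriminator D at test time as an abnormality detector; the cross-channel frame/optical-flow pairing, the patch-wise output of D, and all names are assumptions based on the abstract.

import torch

@torch.no_grad()
def abnormality_map(frame, flow, D):
    # frame: (1, 3, H, W) RGB frame; flow: (1, 2, H, W) optical flow.
    # D is a patch-wise discriminator trained (against G) only on normal
    # data, so it assigns low "normality" scores to patterns it never saw.
    normality = D(frame, flow)      # e.g. (1, 1, h, w) patch scores in [0, 1]
    return 1.0 - normality          # high values flag candidate abnormal regions

Thresholding this map per patch would give a rough spatial localization of the abnormal event.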


Citations: Cited by 163 publications (127 citation statements).
References: 34 publications.
“…Therefore, researchers (e.g., in [10]) have usually utilized pre-trained networks to extract features from the scenes, and the decision is delegated to another module. (2) To train end-to-end models for this task, only recently have [11][12][13][14] used generative adversarial networks (GANs), adopting unsupervised methods that learn the positive class (i.e., irregular events). In these methods, two networks (i.e., a generator and a discriminator) are trained.…”
Section: Introduction
confidence: 99%
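For illustration, here is a generic sketch of the two-network (generator/discriminator) training mentioned in the excerpt above, written in PyTorch on normal-only data; the frame-to-flow conditioning, the BCE/L1 losses, and the function signature are assumptions, not the exact recipes of the cited works.

import torch
import torch.nn.functional as F

def train_step(frames, flows, G, D, opt_G, opt_D):
    # One adversarial update on a batch drawn from *normal* training videos.
    # Discriminator: observed (frame, flow) pairs vs. pairs generated by G.
    fake_flows = G(frames).detach()
    d_real, d_fake = D(frames, flows), D(frames, fake_flows)
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: fool D, plus an L1 reconstruction term for stability.
    fake_flows = G(frames)
    d_fake = D(frames, fake_flows)
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
             F.l1_loss(fake_flows, flows)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()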
“…We use an adversarial training scheme, similar to those used in generative adversarial networks (GANs) [15]. In contrast to previous GAN-based models (e.g., [11,13,14,16]), however, we show how the two networks (I and D) can help each other to carry out the ultimate task of visual irregularity detection and localization. The two networks can be efficiently learned against each other, where I tries to inpaint the image such that D does not detect the generated image as irregular.…”
Section: Introduction
confidence: 99%
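A rough PyTorch sketch of the inpainting-versus-detection game described in this excerpt: I fills in a masked patch and D is trained to tell inpainted images from untouched ones. The masking scheme, losses, and names are illustrative assumptions, not the cited paper's exact formulation.

import torch
import torch.nn.functional as F

def irregularity_step(images, I, D, opt_I, opt_D, patch=32):
    b, c, h, w = images.shape
    # Randomly mask one patch per batch for I to reconstruct.
    y = torch.randint(0, h - patch, (1,)).item()
    x = torch.randint(0, w - patch, (1,)).item()
    masked = images.clone()
    masked[:, :, y:y + patch, x:x + patch] = 0.0
    inpainted = I(masked)

    # D: label 1 for original (regular) images, 0 for inpainted ones.
    d_real, d_fake = D(images), D(inpainted.detach())
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # I: inpaint so well that D accepts the result as regular.
    d_fake = D(inpainted)
    loss_I = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + \
             F.l1_loss(inpainted, images)
    opt_I.zero_grad(); loss_I.backward(); opt_I.step()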
“…Therefore, GAN-based models employ different heuristics for the evaluation of novelty. For instance, [38] exploits a guided latent space search to infer it, whereas [35] directly queries the discriminator for a normality score.…”
Section: Related Work
confidence: 99%
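The two heuristics contrasted in this excerpt can be sketched as follows in PyTorch; the shapes, a generator G(z) taking a latent vector, and the L1 residual are assumptions, and neither function reproduces the exact procedures of [35] or [38].

import torch
import torch.nn.functional as F

@torch.no_grad()
def novelty_from_discriminator(x, D):
    # Directly query the discriminator for a normality score and invert it.
    return 1.0 - D(x).mean().item()

def novelty_from_latent_search(x, G, z_dim=128, steps=100, lr=0.1):
    # Guided search for the latent code whose generation best reconstructs x;
    # a large residual after the search suggests x lies off the learned
    # normal manifold.
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.l1_loss(G(z), x)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()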
“…In both cases, inspired by [17], [20], [21], our architecture is composed of two fully convolutional networks: the conditional generator G and the conditional discriminator D. The G network is the U-Net architecture [20], an encoder-decoder with skip connections that help preserve important local information. For D, the PatchGAN discriminator [20], [22] is proposed, which is based on a "small" fully convolutional discriminator. Hierarchy of cross-modal GANs: As described in Sec. I, the assumption is that the distribution of the normality patterns exhibits a high degree of diversity.…”
Section: B. Private Layer of Self-Awareness
confidence: 99%
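Below is a minimal PyTorch sketch of the generator/discriminator pair named in this excerpt: a tiny U-Net-style encoder-decoder with one skip connection for G and a small fully convolutional PatchGAN for D. Depths, channel counts, and the frame+flow conditioning are assumptions for brevity, not the cited configuration.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # U-Net-style conditional generator: frame (3 ch) -> optical flow (2 ch).
    def __init__(self, in_ch=3, out_ch=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(128, out_ch, 4, 2, 1)  # 128 = 64 decoded + 64 skip

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return torch.tanh(self.dec1(torch.cat([d2, e1], dim=1)))  # skip connection

class PatchGAN(nn.Module):
    # "Small" fully convolutional conditional discriminator: one score per patch.
    def __init__(self, in_ch=5):  # conditional input: frame (3) + flow (2) channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1), nn.Sigmoid())

    def forward(self, frame, flow):
        return self.net(torch.cat([frame, flow], dim=1))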