2020
DOI: 10.1016/j.inffus.2020.07.008

Revisiting crowd behaviour analysis through deep learning: Taxonomy, anomaly detection, crowd emotions, datasets, opportunities and prospects

Abstract: Crowd behaviour analysis is an emerging research area. Due to its novelty, a proper taxonomy to organise its different sub-tasks is still missing. This paper proposes a taxonomic organisation of existing works following a pipeline, where sub-problems in last stages benefit from the results in previous ones. Models that employ Deep Learning to solve crowd anomaly detection, one of the proposed stages, are reviewed in depth, and the few works that address emotional aspects of crowds are outlined. The importance …

Cited by 59 publications (27 citation statements)
References: 90 publications
“…The authors show the gradual improvement in CNN-based methods such as R-CNN, Fast R-CNN, Deconv-R-CNN, Improved Faster R-CNN, etc., along with other deep learning approaches. Sánchez et al. [16] present a study of crowd behavior analysis and crowd anomaly detection in video using deep learning techniques. The authors list convolutional RBM, Fast R-CNN, 3D CNN, PCANet, deep Gaussian Mixture Model, Convolutional AutoEncoder with LSTM (CAE-LSTM), spatio-temporal CNN, and GAN-based approaches.…”
Section: A Literature Survey
Citation type: mentioning (confidence: 99%)
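To make one of the listed families concrete, the sketch below illustrates reconstruction-error-based crowd anomaly detection with a plain convolutional autoencoder. It is a minimal, assumed example (PyTorch, single-channel 64x64 frames, invented layer sizes), not the implementation of any surveyed work: the model is trained on normal footage, and frames whose reconstruction error exceeds a threshold are flagged as anomalous.

```python
# Minimal sketch of reconstruction-error anomaly scoring with a convolutional
# autoencoder (assumed shapes and architecture; for illustration only).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 1x64x64 grayscale frame into a small feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 16x32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x16x16
            nn.ReLU(),
        )
        # Decoder: reconstruct the input frame from the compressed representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # -> 16x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # -> 1x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, frames):
    """Per-frame mean squared reconstruction error; higher means more anomalous."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

# Usage with dummy data: frames scoring above a validation-derived threshold
# would be reported as anomalies.
model = ConvAutoencoder()
frames = torch.rand(8, 1, 64, 64)   # batch of 8 grayscale 64x64 frames
print(anomaly_scores(model, frames))
```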
“…We observed that related surveys lack a combined review of the various up-to-date video processing functionalities addressed with deep learning techniques. The surveys presented above each focus on a single functionality, such as video anomaly detection [5], abnormal human activity recognition [6], multi-object tracking [10], or behavior analysis [16]. None of them collates the research on these functionalities in one survey.…”
Section: B Motivation
Citation type: mentioning (confidence: 99%)
“…Next, the two features are sent into the pedestrian detection network and fused through a fully connected layer within it. On the basis of this fused feature, pedestrian detection and prediction of the pedestrian's bounding box are carried out by regression using the YOLO target detection algorithm [28][29][30]. This pedestrian detection method based on feature fusion is presented in the third chapter.…”
Section: Design of Pedestrian Movement Path Detection System
Citation type: mentioning (confidence: 99%)
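The fusion-then-regression idea described in the quoted passage can be illustrated with a short sketch: two per-candidate feature vectors are concatenated, fused by a fully connected layer, and a YOLO-style head regresses box parameters plus a pedestrian score. This is an assumed, minimal example (PyTorch, invented feature dimensions and output layout), not the cited authors' code.

```python
# Minimal sketch of feature fusion via a fully connected layer followed by
# YOLO-style bounding-box regression (assumed shapes; for illustration only).
import torch
import torch.nn as nn

class FusionDetectionHead(nn.Module):
    def __init__(self, feat_a_dim=256, feat_b_dim=256, fused_dim=256):
        super().__init__()
        # Fully connected layer that fuses the two feature streams.
        self.fuse = nn.Sequential(
            nn.Linear(feat_a_dim + feat_b_dim, fused_dim),
            nn.ReLU(),
        )
        # Regression head: 4 box values (x, y, w, h) + 1 pedestrian/objectness score.
        self.regress = nn.Linear(fused_dim, 5)

    def forward(self, feat_a, feat_b):
        fused = self.fuse(torch.cat([feat_a, feat_b], dim=1))
        out = self.regress(fused)
        boxes = out[:, :4]                 # predicted box parameters
        score = torch.sigmoid(out[:, 4])   # pedestrian confidence in [0, 1]
        return boxes, score

# Usage with dummy features for 10 candidate locations.
head = FusionDetectionHead()
feat_a = torch.rand(10, 256)   # e.g. appearance-stream features
feat_b = torch.rand(10, 256)   # e.g. motion/second-stream features
boxes, score = head(feat_a, feat_b)
print(boxes.shape, score.shape)  # torch.Size([10, 4]) torch.Size([10])
```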
“…Other works centered around student emotional state detection analyze and process signals from electroencephalogram (EEG), electromyogram (EMG), electrocardiography (ECG), electrodermal activity (EDA), heart rate variability, skin temperature, blood volume pulse, respiration, or electrodermography (EDG)/galvanic skin response (GSR) [27,28,29,30,31,32,33,34,35,36,37]. Researchers [38,39,40,41,42,43,44,45,46,47] report the use of deep learning and machine learning (ML) techniques for emotion classification. Finally, other techniques rely on emotion recognition via computer vision [22,41,48,49,50], linguistic semantic approaches [51], and biological features [52].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)