Although action recognition is a widely studied field in computer vision, the recognition of aggressive activities and crowd violence is comparatively under-studied. Many surveillance cameras are now installed in public streets, creating demand for intelligent crowd-activity detection systems. This paper proposes a method for violence detection in videos. The primary contribution is a novel transfer-learning-based violence detector that gives promising results compared with existing detectors. First, the optical flow of the input video is computed with the Lucas-Kanade method. Then, several 2D templates are constructed from overlapping optical flow magnitudes and orientations. These templates are supplied as input to a pre-trained convolutional neural network, and deep features are extracted from its different layers. A cubic-kernel support vector machine and a subspace k-nearest neighbours classifier are trained for prediction, and the proposed method is tested on three datasets that are commonly used in violence detection studies.
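The front end of this pipeline (window-wise Lucas-Kanade flow, then a magnitude/orientation template) can be sketched in NumPy. This is only a minimal illustration under assumptions, not the authors' implementation: the single-window solver, the 8x8 bin counts, and the synthetic Gaussian-blob demo are all choices made here for clarity.

```python
import numpy as np

def lucas_kanade_window(prev, curr):
    # One Lucas-Kanade step for a single patch: solve the least-squares
    # system [Ix Iy] @ (u, v) = -It derived from the brightness-constancy
    # constraint, using every pixel in the window as one equation.
    Ix = np.gradient(prev, axis=1).ravel()
    Iy = np.gradient(prev, axis=0).ravel()
    It = (curr - prev).ravel()
    A = np.stack([Ix, Iy], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return uv  # estimated (u, v) flow for the whole window

def flow_template(flows, mag_bins=8, ori_bins=8):
    # 2D histogram of flow magnitudes vs. orientations, standing in for
    # the paper's templates (bin counts here are assumptions).
    mags = np.hypot(flows[:, 0], flows[:, 1])
    oris = np.arctan2(flows[:, 1], flows[:, 0])
    hist, _, _ = np.histogram2d(
        mags, oris, bins=[mag_bins, ori_bins],
        range=[[0.0, mags.max() + 1e-9], [-np.pi, np.pi]])
    return hist

# Demo on synthetic data: a Gaussian blob translated one pixel to the
# right, so the recovered flow should be close to (1, 0).
y, x = np.mgrid[0:16, 0:16]
prev = np.exp(-((y - 8.0) ** 2 + (x - 8.0) ** 2) / 20.0)
curr = np.exp(-((y - 8.0) ** 2 + (x - 9.0) ** 2) / 20.0)
uv = lucas_kanade_window(prev, curr)
template = flow_template(np.array([uv, [0.5, 0.5], [-1.0, 0.0]]))
```

In the full method these templates, not the raw frames, would be the images fed to the pre-trained CNN for feature extraction.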
Human action recognition using depth sensors is an emerging technology, especially in the game-console industry. Depth information provides robust features about 3D environments and increases the accuracy of action recognition at short range. This paper presents an approach to recognizing basic human actions using depth information obtained from the Kinect sensor. Actions are recognized using features extracted from the angle and displacement information of skeletal joints, and are classified with support vector machines and the random forest (RF) algorithm. The model is tested on the HUN-3D, MSRC-12, and MSR Action 3D datasets with various testing approaches and obtains promising results, especially with the RF algorithm. The proposed approach produces robust results independent of the dataset, using simple and computationally cheap features.
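The per-frame features described here (joint angles plus joint displacements between consecutive frames) can be sketched as follows. This is a hypothetical sketch, not the paper's code: the joint layout, the chosen joint triples, and the helper names are assumptions made for illustration.

```python
import numpy as np

def joint_angle(a, b, c):
    # Angle (radians) at joint b between the segments b->a and b->c.
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def frame_features(skeleton, prev_skeleton, triples):
    # skeleton: (J, 3) array of 3D joint positions from the depth sensor.
    # Feature vector = angles at the selected joint triples, followed by
    # per-joint displacement magnitudes between consecutive frames.
    angles = [joint_angle(skeleton[i], skeleton[j], skeleton[k])
              for i, j, k in triples]
    disp = np.linalg.norm(skeleton - prev_skeleton, axis=1)
    return np.concatenate([angles, disp])

# Demo: three joints forming a right angle at joint 1; joint 2 moved
# 0.5 units since the previous frame.
skel = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
prev_skel = skel.copy()
prev_skel[2, 1] -= 0.5
feats = frame_features(skel, prev_skel, [(0, 1, 2)])
```

Feature vectors of this kind are cheap to compute per frame, which matches the paper's emphasis on simple, computationally inexpensive features ahead of the SVM/RF classifiers.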