“…The GHOG+GIST feature descriptor is a fusion of gradient and texture descriptors; it uses scale, magnitude, and orientation and gives adequate results, but compared with our proposed method its feature-vector dimension is substantial and it incurs a large computation cost. Comparison with existing methods:

Hockey Fight (HF) dataset — Accuracy (%), AUC (%):
HOG [35]        87.8        -
HOF [35]        83.5        -
HNF [35]        87.5        -
LTP [35]        71.90±0.49  -
ViF [23]        81.60±0.22  88.01
OViF [7]        84.20±3.33  90.32
ViF+OViF [7]    86.30±1.57  91.93
DiMOLIF [34]    88.6±1.2    93.23
GHOG+GIST [10]  …

Violent Flows (VF) dataset — Accuracy (%), AUC (%):
HOG [35]        57.43±0.37  61.82
HOF [35]        58.53±0.32  57.60
HNF [35]        56.52±0.31  59.94
LTP [35]        71.53±0.17  79.86
ViF [23]        81.20±1.79  88.04
OViF [7]        76.80±3.90  80.47
ViF+OViF [7]    86.00±1.41  91.82
DiMOLIF [34]    85.83±4.2   89.25
GHOG+GIST [10]  88.86±5.…

In experimentation, our proposed feature descriptor ran on an 8 GB RAM, Intel Core i7 computer running Windows 10.…”
Section: Results and Analysis
“…In this section, we present the performance of our proposed texture-based feature extraction technique. The five-fold cross-validation technique [7,10,34] has been used for experimentation. Accordingly, each of the two datasets is split into five folds.…”
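The five-fold protocol described above can be sketched as follows. This is an illustrative assumption about the split procedure, not the authors' code; the function name and seed are hypothetical:

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and partition them into five folds.

    Each fold serves once as the test set while the remaining four
    folds form the training set (standard k-fold cross-validation).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

# Example: 100 video clips -> five disjoint train/test partitions.
for train, test in five_fold_indices(100):
    assert len(train) + len(test) == 100
```

Each clip appears in exactly one test fold, so the five test-fold accuracies can be averaged into the mean ± standard deviation figures reported in the comparison tables.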
Section: Experimentation Setting
“…The fusion of three features, called the MoBSIFT descriptor, is used to detect violent and non-violent events. Recently, Lohithashva et al. [10] introduced gradient- and texture-based feature descriptors. The fused feature descriptor extracts prominent features, which are fed to a support vector machine (SVM) classifier to detect violent event scenes in both crowded and uncrowded videos.…”
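The fuse-then-classify pipeline mentioned above can be sketched as follows. The function names and the L2-normalisation step are illustrative assumptions, not the cited implementation; in practice the SVM weights would come from a trained model:

```python
import numpy as np

def fuse(gradient_feats, texture_feats):
    # Concatenate the two descriptors into one feature vector, then
    # L2-normalise so neither descriptor dominates by raw scale.
    v = np.concatenate([gradient_feats, texture_feats])
    n = np.linalg.norm(v)
    return v / n if n else v

def linear_svm_predict(x, w, b):
    # Decision rule of a trained linear SVM: the sign of w.x + b
    # (1 = violent, 0 = non-violent in this hypothetical labelling).
    return 1 if np.dot(w, x) + b >= 0 else 0
```

A fused vector for each clip would be passed to `linear_svm_predict` with weights `w` and bias `b` learned during training.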
Section: Introduction
“…Recently, several surveys [11][12][13] have reviewed the different feature extraction techniques used to detect violent events in videos. Most existing methods are based on spatio-temporal interest points [8], feature fusion [9,10], optical flow [14], textures [15,16], trajectories [17], and descriptor- and deep-learning-based techniques [18]. Some researchers have also focused on effective segmentation [19], subspace techniques [20,21], and classifiers [1,10,22] for detecting violent events.…”
Section: Introduction
“…Most existing methods are based on spatio-temporal interest points [8], feature fusion [9,10], optical flow [14], textures [15,16], trajectories [17], and descriptor- and deep-learning-based techniques [18]. Some researchers have also focused on effective segmentation [19], subspace techniques [20,21], and classifiers [1,10,22] for detecting violent events. However, these methods face difficulty due to complex backgrounds, illumination changes, scale variation, and slow motion in video surveillance.…”
Violent event detection is an interesting research problem and a branch of action recognition and computer vision. The detection of violent events is significant for both the public and private sectors. Automatic surveillance systems are attractive because of their wide range of applications in abnormal event detection. For many years, researchers have worked on violent activity detection and have proposed different feature descriptors based on both vision and acoustic technology. Challenges still exist due to illumination, complex backgrounds, scale changes, sudden variation, and slow motion in videos. In this work, violent event detection is based on the texture features of frames in both crowded and uncrowded scenarios. Our proposed method uses Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) feature descriptors for the detection of violent events. Finally, the prominent features are used with five different supervised classifiers. The proposed feature extraction technique is evaluated on two standard benchmark datasets: Hockey Fight (HF) and Violent Flows (VF).
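The two texture descriptors named above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the 8-neighbour LBP, the 8-level quantisation, the single horizontal GLCM offset, and the choice of three GLCM properties are all assumptions made here for brevity.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    # Basic 8-neighbour LBP over the interior pixels of a grayscale
    # image: each neighbour >= centre contributes one bit to the code.
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.int64) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalised 256-bin LBP histogram

def glcm_features(img, levels=8):
    # Quantise to `levels` grey levels, build the co-occurrence matrix
    # for horizontally adjacent pixels (offset (0, 1)), then derive
    # three common Haralick-style properties.
    q = (img.astype(np.float64) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])
```

Per frame, the LBP histogram and the GLCM property vector would be concatenated into one descriptor and passed to the supervised classifiers.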