This paper proposes a novel method for sports video scene classification, with the particular aim of supporting video summarization. A condensed version of a video is often more appealing than the full version because it delivers the highlights instantly, yet producing such summaries manually is tedious, requiring significant labor hours and machine time. Owing to the growing demand for video summarization in marketing, advertising agencies, awareness videos, documentaries, and other interest groups, researchers continue to propose automation frameworks and novel schemes. Since scene classification is a fundamental component of video summarization and video analysis, its quality is particularly important. This article examines practical implementation gaps in existing techniques and presents a method that achieves high-quality scene classification. We consider cricket as a case study and classify five scene categories, i.e., batting, bowling, boundary, crowd, and close-up. Our model builds on a pre-trained AlexNet Convolutional Neural Network (CNN) and replaces its fully connected layers with new ones arranged in an encoder fashion. We employ data augmentation to achieve a high accuracy of 99.26% on a comparatively small dataset. To demonstrate the superiority of the method, we conduct a performance comparison on cricket videos against baseline approaches and state-of-the-art deep-learning models, i.e., Inception V3, Visual Geometry Group networks (VGGNet16, VGGNet19), Residual Network (ResNet50), and AlexNet. Our experiments demonstrate that the proposed method with AlexNet produces better results than existing proposals.
Image data contain only spatial information, making two-dimensional (2D) Convolutional Neural Networks (CNNs) well suited to image classification problems. Video data, on the other hand, contain both spatial and temporal information that must be analyzed jointly to solve action recognition problems. 3D CNNs are successfully used for these tasks, but they suffer from an extensive inherent parameter set. Increasing the network's depth, as is common among 2D CNNs, and hence the number of trainable parameters, does not provide a good trade-off between accuracy and complexity for 3D CNNs. In this work, we propose the Pooling Block (PB), an enhanced pooling operation for optimizing action recognition with 3D CNNs. PB comprises three kernels of different sizes that simultaneously sub-sample the feature maps; their outputs are concatenated into a single output vector. We compare our approach with three benchmark 3D CNNs (C3D, I3D, and Asymmetric 3D CNN) on three datasets (HMDB51, UCF101, and Kinetics 400). Our PB method yields a significant improvement in 3D CNN performance with a comparatively small increase in the number of trainable parameters. Using C3D as the benchmark, we further investigate (1) the effect of video frame dimension and (2) the effect of the number of video frames on the performance of 3D CNNs.
INDEX TERMS Action recognition, convolutional neural network, optimization.
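The core Pooling Block idea — sub-sampling the same feature map with three kernels of different sizes and concatenating the results into a single vector — can be illustrated with a minimal NumPy sketch. The kernel sizes (2, 3, 4), the use of max pooling, and the 12×12×12 single-channel input are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def pool3d(x, k):
    """Non-overlapping max pooling over a (D, H, W) feature map
    with cubic kernel k and stride k."""
    D, H, W = x.shape
    d, h, w = D // k, H // k, W // k
    # Crop to a multiple of k, then fold each k-cube into its own axes.
    x = x[:d * k, :h * k, :w * k].reshape(d, k, h, k, w, k)
    return x.max(axis=(1, 3, 5))

def pooling_block(x, kernels=(2, 3, 4)):
    """Sub-sample x with each kernel in parallel and concatenate the
    flattened outputs into one vector (the PB output)."""
    return np.concatenate([pool3d(x, k).ravel() for k in kernels])

x = np.random.rand(12, 12, 12)           # toy spatio-temporal feature map
v = pooling_block(x)
print(v.shape)                           # → (307,)  i.e. 6³ + 4³ + 3³
```

Because the three kernels see the feature map at different granularities, the concatenated vector carries multi-scale information at the cost of only the concatenation, which is consistent with the abstract's claim of a small parameter overhead.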