Abstract—This paper presents a new scale-, rotation-, and translation-invariant interest point descriptor for human action recognition. The descriptor, HMIV (Hu Moment Invariants on Videos), addresses surveillance camera recording problems under different conditions of camera side, position, direction, and illumination. The proposed approach operates directly on raw human action video sequences. The seven Hu moments are computed to extract human action features; the per-frame moment vectors are then condensed into a single 1-D mean vector over all frames. Because Hu moments are invariant to scale, translation, and rotation, the descriptor inherits this robustness. The experiments are evaluated on two datasets, KTH and UCF101. Classification is performed by computing the Euclidean distance between the training and testing descriptors; the action with the minimum distance is selected as the matching action. The maximum classification accuracy achieved is 93.4% on the KTH dataset and 92.11% on UCF101.
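The pipeline the abstract describes (seven Hu moments per frame, a mean 7-D descriptor per video, nearest-neighbour matching by Euclidean distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`hu_moments`, `video_descriptor`, `classify`) and the synthetic frames are assumptions, and the standard Hu invariant formulas are used in place of whatever preprocessing the paper applies.

```python
import numpy as np

def hu_moments(frame):
    """Seven Hu moment invariants of a 2-D grayscale frame (standard formulas)."""
    frame = frame.astype(np.float64)
    h, w = frame.shape
    y, x = np.mgrid[:h, :w]                      # row (y) and column (x) indices
    m00 = frame.sum()
    if m00 == 0:
        return np.zeros(7)
    xc, yc = (x * frame).sum() / m00, (y * frame).sum() / m00

    def eta(p, q):                               # normalized central moment
        mu = (((x - xc) ** p) * ((y - yc) ** q) * frame).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        (e30 + e12) ** 2 + (e21 + e03) ** 2,
        (e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
        (e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
        + 4 * e11 * (e30 + e12) * (e21 + e03),
        (3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
    ])

def video_descriptor(frames):
    """Condense per-frame Hu vectors into one mean 7-D descriptor for the video."""
    return np.mean([hu_moments(f) for f in frames], axis=0)

def classify(test_desc, train):
    """Nearest neighbour over (label, descriptor) pairs by Euclidean distance."""
    return min(train, key=lambda item: np.linalg.norm(test_desc - item[1]))[0]
```

By construction the central moments are computed about the intensity centroid, so translating the subject in the frame leaves the descriptor unchanged; the normalization by powers of `m00` supplies the scale invariance the abstract relies on.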