Human interaction recognition is a prominent topic in computer vision with broad application prospects. At present, it still faces several difficulties: the spatial complexity of human interaction, the differences in action characteristics across time periods, and the complexity of interactive action features, all of which limit recognition accuracy. To address the differences in action characteristics across time periods, we propose an improved Gaussian-model-based method that fuses time-phased features to extract video keyframes and remove the influence of a large amount of redundant information. To address the complexity of interactive action features, we propose a multi-feature fusion network based on parallel Inception and ResNet modules. This network not only reduces the number of parameters but also improves performance; it alleviates the degradation caused by increasing network depth and achieves higher classification accuracy. To address the spatial complexity of human interaction, we combine whole-video features with individual-video features, making full use of the feature information in interactive video, and propose a human interaction recognition algorithm based on whole-individual detection: the whole video captures the global features of both participants in the action, while each individual video captures the detailed features of a single person. Making full use of both whole-video and individual-video feature information is the main contribution of this paper to the field of human interaction recognition, and experimental results on the UT-Interaction dataset show that the method reaches an accuracy of 91.7%.
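The whole-individual combination described above can be illustrated with a minimal sketch. All names, feature shapes, and the linear scorer below are assumptions for illustration only, not the paper's actual implementation:

```python
import numpy as np

def fuse_whole_individual(f_whole, f_person_a, f_person_b):
    # Concatenate global (whole-video) features with each individual's
    # features, so the classifier sees both the joint motion pattern and
    # per-person detail. Feature dimensions here are hypothetical.
    return np.concatenate([f_whole, f_person_a, f_person_b])

def classify(fused, weights, labels):
    # Minimal linear scorer over the fused vector (illustrative only);
    # the paper's actual classifier is a deep fusion network.
    scores = weights @ fused
    return labels[int(np.argmax(scores))]
```

In this sketch the fused vector simply stacks the three feature sources; in practice the fusion would happen inside the network before the final classification layer.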
Human activity recognition is a key technology in intelligent video surveillance and an important research direction in computer vision. However, the complexity of human interaction features and the differences in motion characteristics across time periods remain open problems. In this paper, a human interaction recognition algorithm based on a parallel multi-feature fusion network is proposed. First, since different time periods of an action provide different amounts of information, an improved time-phased video downsampling method based on a Gaussian model is proposed. Second, the Inception module uses convolution kernels of different scales for feature extraction, improving network performance while reducing the number of parameters, and the ResNet module mitigates the degradation problem caused by increased network depth, achieving higher classification accuracy. We therefore combine the advantages of the Inception and ResNet networks to extract feature information, merge the extracted features, and continue training on the merged features to realize a parallel multi-feature neural network. Experiments are carried out on the UT dataset. Compared with traditional activity recognition algorithms, the proposed method accomplishes the recognition of six kinds of interactive actions more effectively, reaching an accuracy of 88.9%.
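The Gaussian-model time-phased downsampling step might look like the following minimal sketch. Centring the Gaussian on the middle of the clip and the parameter values are assumptions; the paper's actual phase model is not specified here:

```python
import numpy as np

def gaussian_frame_weights(n_frames, mu_frac=0.5, sigma_frac=0.2):
    # Weight each frame index by a Gaussian over normalised time, so frames
    # near the assumed action peak carry more weight than the redundant
    # frames at the start and end of the clip.
    t = np.linspace(0.0, 1.0, n_frames)
    w = np.exp(-0.5 * ((t - mu_frac) / sigma_frac) ** 2)
    return w / w.sum()

def select_keyframes(n_frames, n_keep, mu_frac=0.5, sigma_frac=0.2):
    # Keep the n_keep frames with the largest Gaussian weights, returned
    # in temporal order; this discards low-information frames.
    w = gaussian_frame_weights(n_frames, mu_frac, sigma_frac)
    return np.sort(np.argsort(w)[-n_keep:])
```

For a 100-frame clip with the defaults above, the selected keyframes cluster around the middle of the clip; shifting `mu_frac` per time phase would bias the sampling toward earlier or later stages of the action.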