Skeleton-based action recognition has advanced significantly in the past decade. Among deep learning-based action recognition methods, one of the most commonly used structures is the two-stream network, which extracts high-level spatial and temporal features from skeleton coordinates and optical flows, respectively. However, other features, such as the structure of the skeleton or the relations of specific joint pairs, are sometimes ignored, even though they can also improve action recognition performance. To learn these low-level skeleton features more robustly, this paper introduces an efficient fully convolutional network that processes multiple input features. The network has multiple streams, each with the same encoder-decoder structure: a temporal convolutional network and a co-occurrence convolutional network encode the local and global features, and a convolutional classifier decodes the high-level features to classify the action. Moreover, a novel fusion strategy is proposed that combines independent feature learning with dependent feature relating. Detailed ablation studies confirm the network's robustness to all feature inputs, and combining more features by adding streams further improves performance. The proposed network is evaluated on three skeleton datasets: NTU-RGB+D, Kinetics, and UTKinect. The experimental results show its effectiveness and its performance superiority over state-of-the-art methods.
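To make the described architecture concrete, the following is a minimal PyTorch-style sketch of a multi-stream encoder-decoder of this kind. All module names, layer sizes, and the logit-averaging fusion are illustrative assumptions, not the paper's actual implementation (which also relates features across streams).

```python
import torch
import torch.nn as nn


class StreamEncoderDecoder(nn.Module):
    """One stream: a temporal-convolution + co-occurrence-convolution encoder
    followed by a convolutional classifier. Sizes are illustrative only."""

    def __init__(self, in_channels, num_joints, num_classes):
        super().__init__()
        # Temporal convolution: encodes local features along the frame axis.
        self.temporal = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=(9, 1), padding=(4, 0)),
            nn.ReLU(),
        )
        # Co-occurrence convolution: with joints moved to the channel axis,
        # a 1x1 convolution aggregates global features across all joints.
        self.cooccurrence = nn.Sequential(
            nn.Conv2d(num_joints, 64, kernel_size=1),
            nn.ReLU(),
        )
        # Convolutional classifier decodes high-level features to class logits.
        self.classifier = nn.Sequential(
            nn.Conv2d(64, num_classes, kernel_size=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, x):
        # x: (batch, channels, frames, joints), e.g. 3 channels for xyz coords.
        h = self.temporal(x)        # local temporal features
        h = h.permute(0, 3, 2, 1)   # (batch, joints, frames, features)
        h = self.cooccurrence(h)    # global co-occurrence features
        return self.classifier(h)   # (batch, num_classes)


class MultiStreamNet(nn.Module):
    """Runs one identical stream per input feature and fuses their outputs."""

    def __init__(self, streams):
        super().__init__()
        self.streams = nn.ModuleList(streams)

    def forward(self, inputs):
        # Simple late fusion by averaging logits; the paper's fusion strategy
        # additionally relates features across streams, omitted here.
        logits = [s(x) for s, x in zip(self.streams, inputs)]
        return torch.stack(logits).mean(dim=0)


# Two streams, e.g. joint coordinates plus a second feature of the same shape.
net = MultiStreamNet([StreamEncoderDecoder(3, 25, 60) for _ in range(2)])
out = net([torch.randn(4, 3, 100, 25), torch.randn(4, 3, 100, 25)])
print(out.shape)  # torch.Size([4, 60])
```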
Skeleton-based action recognition has recently attracted extensive attention in the computer vision community. Previous studies, especially GCN-based methods, have achieved remarkable improvements on this task. However, existing GCN-based methods apply global average pooling to the extracted features before the classifier, which can hurt recognition performance because it ignores the fact that not all features along the temporal dimension are equally important. To tackle this issue, this article proposes a feature selection network (FSN) trained with actor-critic reinforcement learning. Given the extracted feature sequence, the FSN learns to adaptively select the most representative features and discard ambiguous ones for action recognition. In addition, conventional graph convolution is a local operation; it cannot fully capture the non-local joint dependencies that can be vital for recognizing an action. We therefore also propose a generalized graph generation module to capture latent dependencies, and build on it a generalized graph convolution network (GGCN). The GGCN and FSN are combined in a three-stream recognition framework in which different types of information from the skeleton data are fused to further improve recognition accuracy. Extensive experiments demonstrate that the proposed FSN is a flexible and effective module that can cooperate with any existing GCN-based framework to enhance recognition accuracy, that the proposed GGCN extracts richer skeleton features for skeleton-based action recognition, and that our method achieves superior performance on several public datasets, e.g., 95.7% top-1 accuracy on NTU-RGB+D and 86.7% top-1 accuracy on NTU-RGB+D 120.
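The two components can be sketched in PyTorch-style pseudocode as follows: (a) a generalized graph convolution that augments the fixed skeleton graph with a latent, data-dependent graph so non-local joint pairs can interact, and (b) a feature-selection step that replaces global average pooling with a learned temporal weighting. All names and sizes are illustrative assumptions; in particular, the paper trains its selector with actor-critic reinforcement learning, which this differentiable soft-attention stand-in omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneralizedGraphConv(nn.Module):
    """Graph convolution over a fixed skeleton adjacency plus a learned,
    data-dependent graph. An illustrative sketch of the idea only."""

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)  # normalized (V, V) skeleton graph
        embed = max(out_channels // 4, 1)
        # Embeddings used to infer latent joint dependencies from the input.
        self.theta = nn.Conv2d(in_channels, embed, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, embed, kernel_size=1)
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        # Generalized graph generation: similarity of time-averaged joint
        # embeddings yields a latent (V, V) adjacency per sample.
        q = self.theta(x).mean(dim=2).permute(0, 2, 1)  # (N, V, C')
        k = self.phi(x).mean(dim=2)                     # (N, C', V)
        latent = F.softmax(q @ k, dim=-1)               # (N, V, V)
        graph = self.A.unsqueeze(0) + latent            # fixed + latent graph
        h = self.proj(x)                                # (N, C_out, T, V)
        # Aggregate each joint's neighbors under the combined graph.
        return torch.einsum("nctv,nvw->nctw", h, graph)


class FeatureSelection(nn.Module):
    """Weights per-frame features instead of averaging them uniformly.
    The paper's FSN learns this selection with actor-critic RL; this
    differentiable stand-in only conveys the selection idea."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, feats):
        # feats: (batch, frames, channels) feature sequence from the backbone.
        w = torch.softmax(self.score(feats), dim=1)  # per-frame emphasis
        return (w * feats).sum(dim=1)                # (batch, channels)
```

In a full pipeline under these assumptions, stacked graph-convolution layers would produce the per-frame feature sequence, and the selection module would replace the global average pooling immediately before the classifier.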