Skeleton-based action recognition using graph convolutional networks (GCNs), which generalize CNNs to more flexible non-Euclidean structures, has shown outstanding results. However, several problems remain in earlier GCN-based models. (I) The graph structure is fixed and shared across all model layers and input samples. Given the hierarchy of the GCN model and the diversity of action recognition inputs, this may not be appropriate. (II) Second-order information such as bone length and orientation is rarely studied, although it is informative and discriminative for human action recognition. This paper presents an extended multi-stream adaptive graph convolutional neural network (EMS-AAGCN) for skeleton-based action recognition. The topology of the graph in the proposed model can be learned either uniformly or individually based on the input data. This data-driven approach makes the graph construction more flexible and allows the model to adapt quickly to various datasets. In addition, a spatial-temporal channel attention module embedded in the adaptive graph convolutional layer lets the model pay more attention to important joints, frames, and features. Furthermore, an enhanced multi-stream framework models joints, bones, and their motion, improving recognition accuracy. Our method outperforms the state of the art on two large-scale datasets, NTU-RGBD and Kinetics-Skeleton.
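To illustrate the multi-stream inputs described above (joints, bones, and their motion), the following is a minimal sketch of how bone and motion streams can be derived from raw joint coordinates. The function name, array layout, and parent mapping are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def build_streams(joints, parents):
    """Derive bone and motion streams from a joint-coordinate sequence.

    joints : (T, V, C) array -- T frames, V joints, C coordinates.
    parents: length-V index list mapping each joint to its parent
             (the root joint maps to itself, yielding a zero bone).
    Layout and parent convention are assumptions for this sketch.
    """
    joints = np.asarray(joints, dtype=float)
    # Bone stream: vector from each joint's parent to the joint itself,
    # capturing bone length and orientation (second-order information).
    bones = joints - joints[:, parents, :]
    # Motion streams: frame-to-frame differences of joints and bones
    # (first frame padded so shapes match the input).
    joint_motion = np.diff(joints, axis=0, prepend=joints[:1])
    bone_motion = np.diff(bones, axis=0, prepend=bones[:1])
    return bones, joint_motion, bone_motion
```

In a multi-stream setup of this kind, each derived stream is typically fed to its own network branch and the class scores are fused at the end.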