Estimating 3D human pose has been studied extensively in computer vision owing to its broad range of applications. The problem nevertheless remains difficult because of complex body structures, occlusion, varying frame rates, and differences in subject size and image resolution. This paper demonstrates the effectiveness of combining graph neural networks (GNNs) and long short-term memory (LSTM) networks for 2D-to-3D pose estimation over a sequence of frames. The LSTM serves as a temporal feature extractor, while bone length and joint angle play a crucial role in pose estimation. Our model uses a GNN to capture the relationships between neighboring joints and their angles, and then predicts the final 3D joint positions with a multi-layer perceptron (MLP). Semi-supervised learning and a frame-dropping strategy are employed to further improve accuracy. Our model outperforms several state-of-the-art methods, achieving a joint localization error of 6.5 mm on the HumanEva-1 and Human3.6M datasets.
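To make the described pipeline concrete, the following is a minimal, illustrative sketch of the graph-based lifting step only: one GCN-style propagation over a toy skeleton followed by a linear MLP head mapping joint features to 3D coordinates. The skeleton, layer sizes, and weights are all hypothetical, and the temporal LSTM feature extractor and training strategy are omitted; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy skeleton: 4 joints; edges encode bone connections (hypothetical example)
NUM_JOINTS, IN_DIM, HID, OUT_DIM = 4, 2, 8, 3
edges = [(0, 1), (1, 2), (1, 3)]

# Symmetric adjacency with self-loops, row-normalized (standard GCN-style propagation)
A = np.eye(NUM_JOINTS)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A = A / A.sum(axis=1, keepdims=True)

W1 = rng.normal(0.0, 0.1, (IN_DIM, HID))   # graph-layer weights (random stand-ins)
W2 = rng.normal(0.0, 0.1, (HID, OUT_DIM))  # MLP head weights (random stand-ins)

def lift_2d_to_3d(joints_2d):
    """Aggregate neighboring joints, apply ReLU, then predict 3D coordinates."""
    h = np.maximum(A @ joints_2d @ W1, 0.0)  # message passing over the skeleton graph
    return A @ h @ W2                        # per-joint 3D output

joints_2d = rng.normal(size=(NUM_JOINTS, IN_DIM))  # mock 2D joint detections
pred_3d = lift_2d_to_3d(joints_2d)
print(pred_3d.shape)  # (4, 3): one 3D coordinate per joint
```

In a full system, `joints_2d` would instead be LSTM features pooled over a window of frames, and the weights would be learned end-to-end.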