This paper proposes a framework for human action recognition (HAR) using skeletal features extracted from depth video sequences. HAR underpins applications such as health care, fall detection, human position tracking, video analysis, and security. We use joint angle quaternions and absolute joint positions to recognize human actions, and we additionally map the joint positions onto the 𝔰𝔢(3) Lie algebra and fuse this representation with the other features. The approach comprises three steps: (i) automatic extraction of skeletal features (absolute joint positions and joint angles), (ii) HAR with a multi-class Support Vector Machine, and (iii) HAR by fusing the features and by fusing the classification decisions. The methods are evaluated on two publicly available and challenging datasets, UTKinect-Action and Florence3D-Action. The experimental results show that the absolute joint position feature outperforms the other features and that the proposed framework is highly promising compared with existing methods.
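The pipeline sketched in the abstract (quaternion joint angles mapped to a Lie-algebra vector, feature fusion, multi-class SVM, and decision fusion) can be illustrated as follows. This is a minimal sketch on toy data, not the paper's implementation: all array shapes, variable names, and the per-frame (rather than per-sequence) descriptors are illustrative assumptions, and scikit-learn's `SVC` stands in for whichever multi-class SVM the authors used.

```python
# Illustrative sketch only; names and shapes are hypothetical, not from the paper.
import numpy as np
from sklearn.svm import SVC

def quat_log(q):
    """Map a unit quaternion (w, x, y, z) to its axis-angle vector in R^3
    via the quaternion logarithm map (a Lie-algebra representation)."""
    q = q / np.linalg.norm(q)
    w, v = q[0], q[1:]
    s = np.linalg.norm(v)
    if s < 1e-12:                     # near-identity rotation
        return np.zeros(3)
    angle = 2.0 * np.arctan2(s, w)    # rotation angle
    return angle * v / s              # angle * unit axis

# Toy skeleton sequence: T frames, J joints (random stand-in data)
rng = np.random.default_rng(0)
T, J = 10, 5
positions = rng.normal(size=(T, J, 3))   # absolute joint positions
quats = rng.normal(size=(T, J, 4))       # per-joint rotations as quaternions

pos_feat = positions.reshape(T, -1)      # absolute-joint-position feature
ang_feat = np.array([[quat_log(q) for q in frame]
                     for frame in quats]).reshape(T, -1)

# Feature fusion: concatenate the per-frame descriptors
fused = np.hstack([pos_feat, ang_feat])
labels = np.arange(T) % 3                # 3 dummy action classes

clf_fused = SVC(decision_function_shape="ovr").fit(fused, labels)

# Decision fusion: sum per-class scores of classifiers trained on each
# feature separately, then take the highest-scoring class
clf_pos = SVC(decision_function_shape="ovr").fit(pos_feat, labels)
clf_ang = SVC(decision_function_shape="ovr").fit(ang_feat, labels)
scores = clf_pos.decision_function(pos_feat) + clf_ang.decision_function(ang_feat)
pred = scores.argmax(axis=1)
```

In this sketch, feature fusion happens before training (one classifier on the concatenated descriptor), while decision fusion combines the outputs of independently trained classifiers; the abstract evaluates both strategies.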