Videos taken from a single camera are the most common source of human motion data. In this paper, we present a novel method to reconstruct the motion of a human-like figure from inter-frame feature correspondences in a single video stream. We exploit a motion library to resolve the depth ambiguity that arises when recovering 3D configurations from 2D features. Our reconstruction method takes three major steps: time-warping to align the reference motion with the motion in the video, reconstructing the joint orientations, and estimating the root trajectory. Experimental results show that our approach can reconstruct highly dynamic motions, such as the shooting motions of soccer players, which would otherwise be hard to capture.
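The first of the three steps is temporal alignment between the reference motion from the library and the motion observed in the video. As a rough illustration only, this kind of alignment is commonly done with dynamic time warping over per-frame feature vectors; the sketch below is a generic DTW implementation, not the paper's actual algorithm, and all names in it are hypothetical.

```python
import numpy as np

def dtw_align(reference, observed):
    """Align two feature sequences with dynamic time warping.

    reference, observed: arrays of shape (T, D) of per-frame features
    (e.g. 2D feature positions flattened per frame -- an assumption,
    not the paper's feature definition).
    Returns the warp path as a list of (ref_idx, obs_idx) pairs.
    """
    n, m = len(reference), len(observed)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(reference[i - 1] - observed[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the end to recover the optimal warp path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1],
                              cost[i - 1, j],
                              cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return path

# Toy usage: a 3-frame reference aligned to a 4-frame observation
# in which the first pose lingers for two frames.
ref = np.array([[0.0], [1.0], [2.0]])
obs = np.array([[0.0], [0.0], [1.0], [2.0]])
print(dtw_align(ref, obs))
```

Once such a correspondence between frames is established, the per-frame depth ambiguity can be resolved against the matched library poses in the later steps.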