Camera calibration is the process of estimating the intrinsic and extrinsic parameters of a camera [1]. It makes it possible to measure distances in the real world from their projections on the image plane [2]. With the continuous development of computer and machine vision, camera calibration has therefore been widely applied in 3D reconstruction [3,4], structure from motion [5], object tracking [6-8], gesture recognition [9,10], etc. Meanwhile, more and more cameras that can acquire 3D information have been proposed, such as stereo cameras and Time-of-Flight (TOF) cameras. Since the launch of the low-cost Microsoft Kinect sensor on 4 November 2010, 3D depth cameras have increasingly attracted researchers due to their versatile applications in computer vision [11]. Although the Kinect was originally developed to improve the game player's experience and enhance human-computer interaction, it is in fact an RGB-D sensor that provides synchronized RGB color and depth images. The image capture device of the Kinect includes a color camera and a depth sensor, which consists of an infrared (IR) projector combined with an IR camera. Experimental results have shown that the Kinect is more accurate than a TOF depth sensor and close in accuracy to a medium-resolution stereo camera. However, it is well known that the calibration parameters vary from device to device, and that the factory presets are not accurate enough for many applications [12]. To deal with this issue, N. Burrus [13] obtained basic Kinect calibration algorithms by using
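To make the notion of intrinsic and extrinsic parameters concrete, the following Python sketch estimates them from images of a planar chessboard, a standard calibration procedure. It is only an illustration under stated assumptions: the original text names no library, so OpenCV is assumed here, and the pattern size, square size, and image folder are hypothetical placeholders.

    # A minimal calibration sketch (assumed: OpenCV; hypothetical pattern/paths).
    import glob
    import numpy as np
    import cv2

    pattern = (9, 6)   # inner-corner grid of a hypothetical chessboard
    square = 0.025     # assumed square edge length in meters

    # 3D corner coordinates in the board's own frame (the Z = 0 plane).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for path in glob.glob("calib_images/*.png"):  # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            # Refine corner locations to sub-pixel accuracy.
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # K holds the intrinsics (focal lengths, principal point); rvecs and tvecs
    # are the per-view extrinsics; dist models lens distortion.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)

The recovered intrinsic matrix K and distortion coefficients are exactly the per-device quantities that, as noted above, differ between units and motivate calibrating each sensor individually rather than relying on factory presets.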