Camera calibration is a key task in computer vision, with extensive applications in domains such as photogrammetry, 3D reconstruction, augmented reality, and autonomous driving. The Direct Linear Transform (DLT) algorithm, a classical approach to camera calibration, estimates camera parameters by solving a system of linear equations. However, traditional DLT methods can suffer from accuracy and stability problems in the presence of noise, lens distortion, and other nonlinear effects. To address these limitations, this paper introduces a camera calibration method based on an Improved DLT algorithm. The method incorporates a distortion model into the traditional DLT formulation and applies Levenberg-Marquardt (LM) optimization to improve calibration accuracy and stability. The key steps are data preparation, DLT estimation, and nonlinear optimization. Experimental results show that the Improved DLT algorithm outperforms traditional DLT methods, particularly for cameras with significant distortion and a wide field of view: it yields smaller reprojection errors and a more uniform error distribution, especially near image edges. This work contributes a more accurate and robust camera calibration method, providing a practical tool for computer vision applications and supporting the further development and adoption of computer vision technology.
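
The sketch below is a minimal illustration, not the paper's implementation, of the pipeline the abstract describes: a DLT estimate of the 3x4 projection matrix from 3D-2D correspondences, followed by Levenberg-Marquardt refinement of the reprojection error with a simple one-term radial distortion parameter. All function and variable names are illustrative, and the distortion term is applied in pixel units purely for brevity (a full calibration would apply it in normalized camera coordinates and would normalize the DLT data for numerical stability).

```python
import numpy as np
from scipy.optimize import least_squares


def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from 3D points X (N,3) and pixels x (N,2)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    A = np.asarray(rows)
    # Least-squares solution of A p = 0: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)


def project(params, X):
    """Project 3D points using a flattened P (12 values) plus one radial coefficient k1."""
    P = params[:12].reshape(3, 4)
    k1 = params[12]
    Xh = np.hstack([X, np.ones((len(X), 1))])
    uvw = Xh @ P.T
    xy = uvw[:, :2] / uvw[:, 2:3]
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2)  # simplified radial distortion term for illustration only


def refine(P0, X, x):
    """LM refinement of the DLT estimate, minimizing reprojection residuals."""
    params0 = np.append(P0.ravel(), 0.0)  # start from the DLT result and zero distortion
    residual = lambda p: (project(p, X) - x).ravel()
    return least_squares(residual, params0, method="lm")


if __name__ == "__main__":
    # Synthetic test: known camera, noisy observations, DLT then LM refinement.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 3)) + [0.0, 0.0, 5.0]  # points in front of the camera
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P_true = np.hstack([K, np.zeros((3, 1))])
    x_true = project(np.append(P_true.ravel(), 0.0), X)
    x_noisy = x_true + rng.normal(scale=0.5, size=x_true.shape)  # pixel noise
    P0 = dlt_projection(X, x_noisy)
    result = refine(P0, X, x_noisy)
    err = np.linalg.norm(project(result.x, X) - x_true, axis=1)
    print("mean reprojection error (px):", err.mean())
```

Run as a script, this prints the mean reprojection error of the refined model against the noise-free projections, which is one way the abstract's comparison between the DLT initialization and the LM-refined result could be quantified.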