In recent studies on sparse modeling, nonconvex regularization approaches (particularly, $L_q$ regularization with $0 < q < 1$) have been demonstrated to yield substantial benefits in sparsity inducing and efficiency. Compared with convex regularization approaches (say, $L_1$ regularization), however, the convergence issue of the corresponding algorithms is more difficult to tackle. In this paper, we address this difficult issue for a specific but typical nonconvex regularization scheme, the $L_{1/2}$ regularization, which has been successfully applied in many applications. More specifically, we study the convergence of the iterative half thresholding algorithm (the half algorithm for short), one of the most efficient and important algorithms for solving the $L_{1/2}$ regularization problem. As the main result, we show that under certain conditions the half algorithm converges to a local minimizer of the $L_{1/2}$ regularization problem, with an eventually linear convergence rate. The established result provides a theoretical guarantee for a wide range of applications of the half algorithm. We also provide a set of simulations to support the correctness of the theoretical assertions and to compare the time efficiency of the half algorithm with that of other typical algorithms for $L_{1/2}$ regularization, such as the iteratively reweighted least squares (IRLS) algorithm and the iteratively reweighted $\ell_1$ minimization (IRL1) algorithm.

Index Terms: Convergence, iterative half thresholding algorithm, $L_{1/2}$ regularization, nonconvex regularization.
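For concreteness, the half algorithm alternates a gradient step on the least-squares fidelity term with a componentwise half thresholding operator. The following is a minimal NumPy sketch, assuming the standard closed-form half thresholding formula from the $L_{1/2}$ regularization literature (threshold $\frac{\sqrt[3]{54}}{4}(\lambda\mu)^{2/3}$); the function names `half_threshold` and `half_algorithm` are our own, not from the paper.

```python
import numpy as np

def half_threshold(t, lam):
    # Componentwise half thresholding operator H_{lam,1/2} (hedged sketch;
    # `lam` plays the role of lambda*mu inside the iteration).
    # Entries with |t| at or below the threshold are set exactly to zero;
    # larger entries are shrunk via the closed-form cosine expression.
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(t, dtype=float)
    mask = np.abs(t) > thresh
    # arccos argument stays in (0, sqrt(2)/2] on the mask, so it is well-defined
    phi = np.arccos((lam / 8.0) * (np.abs(t[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * t[mask] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi)
    )
    return out

def half_algorithm(A, y, lam, mu, n_iter=500):
    # Iterative half thresholding for min ||A x - y||^2 + lam * ||x||_{1/2}^{1/2}:
    # a gradient step with step size mu, then the half thresholding operator.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = half_threshold(x + mu * A.T @ (y - A @ x), lam * mu)
    return x
```

Note that, unlike the soft thresholding operator of $L_1$ regularization, the half thresholding operator shrinks large entries only slightly (it approaches the identity as $|t| \to \infty$), which is one source of the reduced bias often attributed to nonconvex regularizers.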