Under the hypothesis that the initial point is a quasi-regular point, we use a majorant condition to present a new semi-local convergence analysis of an extension of the Gauss-Newton method for solving convex composite optimization problems. In this analysis, the conditions and the proof of convergence are simplified by using a majorant condition to define regions where the Gauss-Newton sequence is "well behaved". AMSC: 47J15, 65H10.
We present a local convergence analysis of inexact Newton-like methods for solving nonlinear equations under majorant conditions. This analysis provides an estimate of the convergence radius and a clear relationship between the majorant function, which relaxes the Lipschitz continuity of the derivative, and the nonlinear operator under consideration. It also allows us to obtain some important special cases.
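As a minimal illustration of the scheme this abstract studies (not the paper's own analysis), an inexact Newton method solves the linearized system J(x)s = -F(x) only approximately at each step, up to a forcing term η. The sketch below uses a Jacobi inner solver; the test system and all names are illustrative assumptions.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=100):
    """Inexact Newton: at each outer step, solve J(x) s = -F(x) only
    approximately, stopping the inner iteration once the residual
    satisfies ||F(x) + J(x) s|| <= eta * ||F(x)||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        D = np.diag(Jx)  # diagonal, used by the Jacobi inner solver
        s = np.zeros_like(x)
        for _ in range(1000):
            inner_res = Fx + Jx @ s
            if np.linalg.norm(inner_res) <= eta * np.linalg.norm(Fx):
                break
            s = s - inner_res / D  # one Jacobi sweep on J s = -F
        x = x + s
    return x

# Illustrative test system with root (1, 2):
#   F1 = x0^3 + x1 - 3,  F2 = x0 + x1^2 - 5
F = lambda x: np.array([x[0]**3 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[3.0 * x[0]**2, 1.0], [1.0, 2.0 * x[1]]])

root = inexact_newton(F, J, x0=[1.5, 1.5])
```

The forcing term `eta` controls how accurately the inner system is solved; smaller values recover the exact Newton step, at higher per-iteration cost.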
The Gauss-Newton method for solving nonlinear least squares problems is studied in this paper. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, a local convergence analysis is presented. This analysis allows us to obtain the optimal convergence radius and the largest radius of uniqueness of the stationary point, and to unify two previous and unrelated results.
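For concreteness (and independently of the paper's majorant analysis), the Gauss-Newton iteration for min ||r(x)||² updates x via the linearized least-squares subproblem min ||J(x)s + r(x)||. A minimal numpy sketch, with an illustrative exponential-fitting example (the model and starting point are assumptions, not from the paper), might read:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Basic Gauss-Newton for min ||r(x)||^2:
    x_{k+1} = x_k + s_k, where s_k solves min ||J(x_k) s + r(x_k)||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # linearized least-squares subproblem, solved via lstsq
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative zero-residual problem: fit y = a * exp(b * t)
# to noiseless data generated with a = 2, b = 0.5.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 2.0 * np.exp(0.5 * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

p_hat = gauss_newton(residual, jacobian, x0=[1.5, 0.3])
```

On zero-residual problems like this one, Gauss-Newton converges quadratically near the solution, which is the local behavior the convergence-radius results above quantify.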
In this paper, we present a local convergence analysis of inexact Gauss-Newton-like methods for solving nonlinear least squares problems. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, we show that the method is well defined and convergent. Our analysis provides a clear relationship between the majorant function and the function associated with the least squares problem. It also allows us to obtain an estimate of the convergence ball for inexact Gauss-Newton-like methods and some important special cases.
This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new variant is better than the corresponding one for the standard ADMM method and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity of the latter method. Our analysis is based on first presenting and establishing the pointwise iteration-complexity of a regularized non-Euclidean hybrid proximal extragradient framework whose error condition at each iteration includes both a relative error and a summable error. It is then shown that the new ADMM variant is a special instance of the latter framework, where the sequence of summable errors is identically zero when the ADMM stepsize is less than one, or a nontrivial sequence when the stepsize is in the interval [1, (1 + √5)/2). 2000 Mathematics Subject Classification: 47H05, 47J22, 49M27, 90C25, 90C30, 90C60, 65K10.
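To fix ideas, the standard (unregularized) ADMM that this variant improves upon alternates minimizations over the two blocks and a dual update. The sketch below applies it to the lasso problem min ½||Ax − b||² + λ||z||₁ subject to x − z = 0; the problem instance and parameter values are illustrative assumptions, not the paper's regularized method.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=500):
    """Standard ADMM for min 0.5||Ax - b||^2 + lam ||z||_1, s.t. x = z:
    x-update: linear solve;  z-update: soft-thresholding;  u: dual ascent."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Illustrative instance with A = I, where the exact lasso solution
# is the soft-thresholding of b: soft(b, lam).
A = np.eye(3)
b = np.array([3.0, -0.5, 1.5])
z_hat = admm_lasso(A, b, lam=1.0)
```

Here the stepsize corresponds to the unit case; the paper's complexity results concern stepsizes up to (1 + √5)/2 and a regularized variant of this iteration.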