Abstract: A reduction of the original double step size iteration to a single step length scheme is derived under a proposed condition that relates the two step lengths in the accelerated double step size gradient descent scheme. The proposed transformation is numerically tested. The obtained results confirm substantial progress over the single step size accelerated gradient descent method defined in the classical way with respect to all analyzed characteristics: number of iterations, CPU time, and number of function evaluations.
“…A common way to determine this parameter is through the features of the second-order Taylor series taken on the appropriate scheme (6). Acceleration parameters computed in this way are applied in the methods described in [1][2][3][4][5]. According to the iteration form (6), we can conclude that the accelerated gradient methods are of the quasi-Newton type, in which the approximation of the Hessian (i.e., of its inverse) is given by a scalar matrix γ_k I, where I is the appropriate identity matrix and γ_k = γ(x_k, x_{k−1}) is the matching acceleration parameter.…”
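The Taylor-series construction described above can be sketched as follows. This is a minimal illustration, not the exact formula of any one cited method: fitting a scalar quadratic model along the last step and solving for the scalar Hessian approximation γ gives the acceleration parameter (the helper name `taylor_acceleration` is ours).

```python
import numpy as np

def taylor_acceleration(f, x_prev, x_curr, g_prev):
    # Fit a scalar quadratic model along the step s = x_curr - x_prev:
    #   f(x_curr) ≈ f(x_prev) + g_prev·s + 0.5 * gamma * ||s||^2,
    # then solve for gamma, the scalar approximation of the Hessian.
    s = x_curr - x_prev
    return 2.0 * (f(x_curr) - f(x_prev) - g_prev @ s) / (s @ s)
```

For a quadratic f with Hessian γI this recovers γ exactly, which is what makes the scalar-matrix quasi-Newton interpretation above work.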
Section: Preliminaries: Accelerated Gradient Methods and Hybrid Itera…
We present a hybridization of the accelerated gradient method with two vector directions. This hybridization is based on the use of a chosen three-term hybrid model. The derived hybrid accelerated double direction model keeps the preferable properties of both constituent methods. Convergence analysis demonstrates at least linear convergence of the proposed iterative scheme on the set of uniformly convex and strictly convex quadratic functions. The results of numerical experiments confirm a better performance profile in favor of the derived hybrid accelerated double direction model when compared to its forerunners.
“…where η > 0 is a constant. The methods presented in [4,6,9,10] can be classified as methods of quasi-Newton type with accelerated approximation of the Hessian inverse, equipped with the line search technique. As in [9], we refer to these methods simply as accelerated gradient descent algorithms with line search.…”
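The line search technique mentioned in the quote above is typically Armijo backtracking. A minimal sketch follows; the parameter names and default values (σ, β, t0) are generic illustrations, not values taken from the cited papers:

```python
import numpy as np

def backtracking(f, x, g, d, t0=1.0, sigma=1e-4, beta=0.5):
    # Shrink t until the Armijo sufficient-decrease condition holds:
    #   f(x + t d) <= f(x) + sigma * t * (g·d),
    # where d is assumed to be a descent direction (g·d < 0).
    t = t0
    while f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= beta
    return t
```

In the accelerated gradient descent algorithms referenced here, d would be the negative gradient scaled by the acceleration parameter.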
Section: Some Conjugate Gradient Methods Calculate the Vector Directi…
“…This so-called acceleration parameter is calculated from the second-order Taylor series of the relevant iteration at two successive points. So, unlike the model (1.5), which defines the vector direction of conjugate gradient methods, in accelerated gradient descent methods [4,6,9,10] the vector direction is the product of the negative gradient vector and a derived acceleration parameter.…”
Section: Some Conjugate Gradient Methods Calculate the Vector Directi…
“…There are several conjugate and modified gradient descent algorithms [2,7,8,11], as well as accelerated gradient models [4,6,10], comparable to the HSM method and its modifications. We can make some comparisons between the various approaches in terms of the vector direction d_k and the step-size value t_k.…”
We improve the convergence properties of the iterative scheme for solving unconstrained optimisation problems introduced in Petrovic et al. [‘Hybridization of accelerated gradient descent method’, Numer. Algorithms (2017), doi:10.1007/s11075-017-0460-4] by optimising the value of the initial step length parameter in the backtracking line search procedure. We prove the validity of the algorithm and illustrate its advantages by numerical experiments and comparisons.
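The idea of tuning the initial step length in backtracking can be illustrated with a generic warm-start heuristic. Note that this is only a sketch under our own assumptions; the cited paper derives its own optimal initial value, which is not reproduced here:

```python
import numpy as np

def seeded_backtracking(f, x, g, d, t_prev, grow=2.0, sigma=1e-4, beta=0.5):
    # Warm-start the Armijo search from the previously accepted step,
    # allowing it to grow again, instead of always restarting from t = 1.
    # (Generic heuristic for illustration only; the paper's optimized
    # initial step length parameter is derived differently.)
    t = grow * t_prev
    while f(x + t * d) > f(x) + sigma * t * (g @ d):
        t *= beta
    return t
```

The benefit of any such seeding is fewer trial evaluations of f per iteration when consecutive accepted steps are of similar magnitude.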
“…There are several iterative methods, each defined in a specific way, relevant for this work. Some of them are presented in the articles (Andrei, 2006), (Stanimirović et al., 2010), (Petrović et al., 2014), (Stanimirović et al.…”
Underage costs are not easily quantifiable in spare parts management. These costs occur when a spare part is required and none are available in inventory. This paper provides another approach to underage cost optimization for subassemblies and assemblies in the aviation industry. The quantity of spare parts is determined by using a method for airplane spare parts forecasting based on Rayleigh's model. Based on that, the underage cost per unit is determined by using the Newsvendor model. Then, by implementing a transformed accelerated double step size gradient method, the underage costs for spare subassemblies and assemblies in the airline industry are optimized.
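The Newsvendor step in the abstract above can be sketched in the inverse direction it describes: given a stocking quantity already fixed by the forecasting model, back out the underage cost it implies. The Rayleigh demand assumption and the helper name are ours, for illustration only; the paper's actual forecasting model and cost derivation may differ:

```python
import math

def implied_underage_cost(q, overage_cost, sigma):
    # Newsvendor optimality condition: F(Q*) = cu / (cu + co).
    # Given a quantity q fixed by the forecasting model, invert this
    # to obtain the underage cost cu that q implies.
    # Demand assumed Rayleigh(sigma) purely for illustration:
    #   F(q) = 1 - exp(-q^2 / (2 sigma^2)).
    F = 1.0 - math.exp(-q * q / (2.0 * sigma * sigma))
    return overage_cost * F / (1.0 - F)
```

For example, if q sits at the median of the demand distribution (F(q) = 0.5), the implied underage cost equals the overage cost, since the critical ratio cu/(cu+co) then equals one half.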