2018
DOI: 10.1002/nla.2216

Objective acceleration for unconstrained optimization

Abstract: Acceleration schemes can dramatically improve existing optimization procedures. In most of the work on these schemes, such as nonlinear generalized minimal residual (N-GMRES), acceleration is based on minimizing the 2-norm of some target on subspaces of ℝⁿ. There are many numerical examples that show how accelerating general-purpose and domain-specific optimizers with N-GMRES results in large improvements. We propose a natural modification to N-GMRES, which significantly improves the performance in a testing…
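The abstract contrasts residual-based acceleration (N-GMRES minimizes the 2-norm of a target over a subspace built from recent iterates) with the proposed modification, which minimizes the objective value itself. The sketch below is a minimal illustration of that objective-based idea on a toy problem; the gradient-descent preconditioner, fixed step size, window size, Nelder-Mead subproblem solver, and quadratic test function are all assumptions made for the illustration, not the algorithm from the paper.

```python
# Minimal sketch of objective-based acceleration over a window of past
# iterates. Illustration only; not the exact O-ACCEL algorithm.
import numpy as np
from scipy.optimize import minimize


def objective_accelerated_descent(f, grad, x0, step=0.1, window=5, iters=50):
    history = [np.asarray(x0, dtype=float)]
    x = history[0]
    for _ in range(iters):
        # Preconditioner: a plain gradient-descent step (assumed choice).
        x_bar = x - step * grad(x)
        # Directions from the new point back to the stored iterates.
        dirs = np.array([xi - x_bar for xi in history[-window:]])

        # Acceleration: pick coefficients that minimize f itself on the
        # affine subspace x_bar + span(dirs), rather than a residual norm.
        def f_on_subspace(alpha):
            return f(x_bar + alpha @ dirs)

        alpha0 = np.zeros(len(dirs))
        res = minimize(f_on_subspace, alpha0, method="Nelder-Mead")
        x_acc = x_bar + res.x @ dirs

        # Keep the accelerated point only if it actually decreases f.
        x = x_acc if f(x_acc) < f(x_bar) else x_bar
        history.append(x)
    return x


# Toy usage on a mildly ill-conditioned convex quadratic (assumed problem).
A = np.diag([1.0, 4.0, 16.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_min = objective_accelerated_descent(f, grad, np.ones(3))
print(x_min)  # should approach the minimizer at the origin
```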

Cited by 5 publications (4 citation statements). References 23 publications (84 reference statements).
“…As our approach tries to minimize the functional value directly, we call it as direct nonlinear acceleration (DNA). We also note that our formulation shares some similarities with (Riseth, 2019; Zhang et al, 2018). However, unlike (Riseth, 2019), we do not require line search and check a decrease condition at each step of our algorithm.…”
Section: Contributions (supporting)
Confidence: 58%
“…We also note that our formulation shares some similarities with (Riseth, 2019; Zhang et al, 2018). However, unlike (Riseth, 2019), we do not require line search and check a decrease condition at each step of our algorithm. On the other hand, Zhang et al (Zhang et al, 2018) do not consider a direct acceleration scheme as they deal with a fixed-point problem.…”
Section: Contributions (supporting)
Confidence: 58%
“…For both O-ACCEL and QNAMM, we tested windows of 3, 5 and 10 past iterates to obtain a numerical approximation of the Hessian. We do not show other algorithms like the N-GMRES or L-BFGS which Riseth (2019) showed didn't perform as well.…”
Section: Alternating Least Squares for Tensor Rank Decomposition (mentioning)
Confidence: 55%
“…For acceleration of general fixed-point iterations, the four problems selected are the expectation maximization (EM) for Poisson admixture, alternating least squares (ALS) for canonical tensor decomposition, the power method for computing dominant eigenvalues and the method of alternating projections (von Neumann (1950), Halperin (1962)) applied to regression with high-dimensional fixed effects. The performances of ACX is compared to competitive general purpose acceleration algorithms: the quasi-Newton acceleration of Zhou et al (2011), the objective acceleration approach of Riseth (2019) and the Anderson Acceleration version of Henderson and Varadhan (2019). For the high-dimensional fixed-effect regression, ACX is compared with equivalent packages in various programming languages.…”
Section: Introduction (mentioning)
Confidence: 99%