Direct Nonlinear Acceleration

Preprint (2019). DOI: 10.48550/arxiv.1905.11692

Abstract: Optimization acceleration techniques such as momentum play a key role in state-of-the-art machine learning algorithms. Recently, generic vector sequence extrapolation techniques, such as regularized nonlinear acceleration (RNA) of Scieur et al. (2016), were proposed and shown to accelerate fixed point iterations. In contrast to RNA, which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct…
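The abstract contrasts the proposed method with RNA, which combines past fixed-point iterates using coefficients obtained from a regularized least-squares problem on the residuals. A minimal sketch of that baseline RNA extrapolation step, following Scieur et al. (2016), might look as follows (function name, regularization value, and test iteration are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-12):
    """Regularized nonlinear acceleration (RNA), minimal sketch.

    Given fixed-point iterates xs = [x_0, ..., x_k], find coefficients c
    with sum(c) = 1 that minimize ||sum_i c_i r_i||^2 + lam * ||c||^2,
    where r_i = x_{i+1} - x_i, and return the combination sum_i c_i x_i.
    """
    X = np.asarray(xs, dtype=float)      # shape (k+1, d)
    R = np.diff(X, axis=0)               # residuals r_i, shape (k, d)
    k = R.shape[0]
    # Lagrangian stationarity gives (R R^T + lam I) c proportional to 1;
    # solve and rescale so the coefficients sum to one.
    M = R @ R.T + lam * np.eye(k)
    z = np.linalg.solve(M, np.ones(k))
    c = z / z.sum()
    return c @ X[:k]                     # extrapolated point
```

On a linear fixed-point iteration x_{t+1} = A x_t + b with spectral radius of A below one, this combination can land (nearly) on the fixed point using only a handful of iterates, which is the acceleration effect the abstract refers to.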

Cited by 1 publication (1 citation statement). References 14 publications.
“…Exploring the possibility of developing a new way to obtain a better guess of the gradient would be an interesting direction. One possibility is by considering a very recent work of [11] which proposes a new extrapolation algorithm.…”
Section: Discussion (confidence: 99%)