2014
DOI: 10.1109/tnnls.2013.2294741

Stochastic Learning via Optimizing the Variational Inequalities

Abstract: A wide variety of learning problems can be posed in the framework of convex optimization. Many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss the stochast…
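The abstract above is truncated; as background only (not quoted from the paper), the block below records the standard variational inequality formulation that VI-convergence analyses of convex learning problems are typically stated against.

```latex
% Background sketch of the standard VI formulation (not quoted from the paper):
% find w^* in the feasible set \Omega such that
\[
  \langle F(w^\ast),\, w - w^\ast \rangle \;\ge\; 0
  \qquad \forall\, w \in \Omega .
\]
% For convex minimization of f over \Omega, taking F = \nabla f makes the
% VI solutions coincide with the minimizers of f on \Omega.
```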

Cited by 12 publications (6 citation statements)
References 10 publications
“…Based on its equivalent reformulation, Nesterov's extrapolation can be employed in MD to achieve optimal individual convergence. It is interesting to ask whether Nesterov's extrapolation can be extended to other optimization methods, such as the alternating direction method of multipliers (ADMM) [28] and preconditioned SGD [23]; applications like that in [32] are also expected. All these issues will be considered in our future work.…”
Section: Discussion (mentioning)
confidence: 99%
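To make the extrapolation step referenced above concrete, here is a minimal Python sketch of generic Nesterov-style extrapolation wrapped around a gradient step; the function name, step-size choice, and toy quadratic are illustrative assumptions, not the mirror-descent update analyzed in the cited work.

```python
import numpy as np

def nesterov_extrapolated_descent(grad, w0, lr, steps=200):
    """Generic Nesterov-style extrapolation around a gradient step.

    grad : callable returning the (possibly stochastic) gradient at a point
    w0   : initial iterate
    lr   : step size (at most 1/L for an L-smooth objective)

    Illustrative sketch only; not the specific scheme of the cited paper.
    """
    w_prev = w0.copy()
    w = w0.copy()
    for t in range(1, steps + 1):
        beta = (t - 1) / (t + 2)           # standard extrapolation weight
        y = w + beta * (w - w_prev)        # extrapolation (look-ahead) point
        w_prev, w = w, y - lr * grad(y)    # gradient step taken at y, not w
    return w

# Toy usage: minimize the smooth convex quadratic 0.5 * ||A w - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad = lambda w: A.T @ (A @ w - b)
lr = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, with L the largest eigenvalue of A^T A
w_hat = nesterov_extrapolated_descent(grad, np.zeros(10), lr)
```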
“…For comparison, the stochastic estimate of mS2GD-RBB is written in a similar way as (9). Note that for mS2GD-RBB, v_k is an unbiased estimator of the gradient, i.e., from (9), we have E[v_k] = ∇P(w_k).…”
Section: B. The Proposed Methods (mentioning)
confidence: 99%
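The unbiasedness property quoted above, E[v_k] = ∇P(w_k), is the defining feature of SVRG-type variance-reduced estimators. Since equation (9) and the mS2GD-RBB details are not reproduced in this excerpt, the sketch below shows only the generic construction for P(w) = (1/n) Σ_i f_i(w), with assumed function and variable names.

```python
import numpy as np

def svrg_style_estimate(grad_i, n, w_k, w_snapshot, full_grad_snapshot, rng):
    """Generic SVRG-style variance-reduced gradient estimate.

    For P(w) = (1/n) * sum_i f_i(w) and i drawn uniformly at random,
        v_k = grad f_i(w_k) - grad f_i(w_snapshot) + full_grad_snapshot
    satisfies E[v_k] = grad P(w_k), since the last two terms cancel in
    expectation.  Illustrative sketch; not the exact mS2GD-RBB update.
    """
    i = rng.integers(n)
    return grad_i(i, w_k) - grad_i(i, w_snapshot) + full_grad_snapshot

# Toy usage on least squares, where f_i(w) = 0.5 * (a_i @ w - b_i)**2.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
grad_i = lambda i, w: A[i] * (A[i] @ w - b[i])
w_tilde = np.zeros(5)                       # snapshot point
mu = A.T @ (A @ w_tilde - b) / len(b)       # full gradient at the snapshot
v_k = svrg_style_estimate(grad_i, len(b), np.ones(5), w_tilde, mu, rng)
```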
“…Among the available methods to solve an SNEP, an elegant approach is to recast the problem as a stochastic variational inequality (SVI) [19]-[21]. The advantage of this approach is that there are many algorithms available for finding a solution of an SVI, some of them already applied to GANs [19], [22].…”
Section: B. Stochastic Nash Equilibrium Problems (mentioning)
confidence: 99%
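As background for the SVI reformulation mentioned in this last excerpt, the block below sketches the textbook way a stochastic Nash equilibrium problem is written as a stochastic variational inequality; it is a generic statement under standard convexity and differentiability assumptions, not the specific formulation of [19]-[21].

```latex
% Background sketch: SNEP recast as a stochastic variational inequality (SVI).
% Player i chooses x_i \in \Omega_i to minimize its expected cost
% E_\xi[ J_i(x_i, x_{-i}, \xi) ].  Stacking the partial gradients gives the
% pseudo-gradient operator
%   F(x) = \big( \nabla_{x_1} E_\xi[J_1(x,\xi)], \dots,
%                \nabla_{x_N} E_\xi[J_N(x,\xi)] \big),
% and (with each J_i convex and differentiable in x_i) x^* is a Nash
% equilibrium if and only if it solves the SVI
\[
  \langle F(x^\ast),\, x - x^\ast \rangle \;\ge\; 0
  \qquad \forall\, x \in \Omega_1 \times \cdots \times \Omega_N .
\]
```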