2013 American Control Conference
DOI: 10.1109/acc.2013.6580576

Rate of convergence analysis of discrete simultaneous perturbation stochastic approximation algorithm

Abstract: A middle-point discrete version of the simultaneous perturbation stochastic approximation algorithm (DSPSA) was previously introduced to solve discrete stochastic optimization problems. In this paper we analyze the rate of convergence of DSPSA. This rate allows for objective comparisons with other discrete stochastic optimization methods.
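
For orientation, the following is a minimal sketch of a middle-point DSPSA iteration of the kind the abstract refers to, assuming the usual two-measurement form: the middle point of the unit hypercube containing the current estimate, independent ±1 Bernoulli perturbations, and two loss evaluations at integer points per iteration. The loss function, gain sequence, iteration count, and rounding rule below are illustrative choices, not the settings analyzed in the paper.

import numpy as np

def dspsa_minimize(loss, theta0, num_iterations=1000, a=0.5, A=100, alpha=0.602, seed=0):
    """Minimal sketch of a middle-point discrete SPSA (DSPSA) loop.

    `loss` is a (possibly noisy) function evaluated only at integer-valued
    points; `theta0` is a real-valued starting estimate.  The gain sequence
    a_k = a / (k + 1 + A)**alpha is an illustrative choice, not the tuning
    used in the cited paper.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(num_iterations):
        a_k = a / (k + 1 + A) ** alpha
        # Middle point of the unit hypercube containing the current estimate.
        pi = np.floor(theta) + 0.5
        # Independent +/-1 Bernoulli perturbations.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two loss measurements per iteration, both taken at integer points.
        y_plus = loss(pi + delta / 2.0)
        y_minus = loss(pi - delta / 2.0)
        # Gradient-like estimate; for +/-1 components, 1/delta equals delta.
        g_hat = (y_plus - y_minus) * delta
        theta = theta - a_k * g_hat
    # Report the nearest integer point as the solution estimate.
    return np.round(theta).astype(int)

# Illustrative usage: a separable quadratic over the integers with additive noise.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = np.array([3, -2, 5])
    noisy_loss = lambda x: float(np.sum((x - target) ** 2) + rng.normal(scale=0.1))
    print(dspsa_minimize(noisy_loss, theta0=np.zeros(3), num_iterations=5000))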


Citation overview: 3 citation statements, all classified as mentioning; the citing publications were published between 2015 and 2023.
Cited by 9 publications (3 citation statements)
References 11 publications
“…The D‐SPSA optimiser proposed by Wang et al. [38] calculates the objective function only at the integer positions. The optimiser needs only two measurements of the loss function at each iteration.…”
Section: Methods (mentioning)
confidence: 99%
“…In this chapter, we discuss the rate of convergence property of DSPSA. We have shown partial and preliminary results of this chapter in Wang and Spall (2013). We set up an upper bound for the finite sample performance and calculate the asymptotic performance of DSPSA in the big-O sense.…”
Section: Rate of Convergence (mentioning)
confidence: 99%
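
Read schematically, the two results described in that passage take the following form, where $\hat{\theta}_k$ denotes the DSPSA iterate after $k$ iterations, $\theta^*$ the optimum, and $B(k)$ and $r_k$ are placeholder expressions standing in for the explicit finite-sample bound and asymptotic rate derived in Wang and Spall (2013), which are not reproduced here:

% Schematic only: B(k) and r_k are placeholders, not expressions taken from the paper.
\mathbb{E}\!\left[\bigl\|\hat{\theta}_k - \theta^*\bigr\|^2\right] \le B(k) \quad \text{for each finite } k,
\qquad
\mathbb{E}\!\left[\bigl\|\hat{\theta}_k - \theta^*\bigr\|^2\right] = O(r_k) \quad \text{as } k \to \infty.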
“…Tuning the controller online by estimating the gradient of the goal function is an effective idea of the data-driven control. For instance, simultaneous perturbation stochastic approximation (SPSA) introduced by Spall estimates the gradient by stochastic approximation [4,5] and model free adaptive control (MFAC) proposed by Hou replaces the gradient with pseudo-partial derivative [6][7][8]. The idea of iterations also has good applications in data-driven method.…”
Section: Introduction (mentioning)
confidence: 99%