Proceedings of the 2003 American Control Conference, 2003.
DOI: 10.1109/acc.2003.1244030
Extremum seeking using analog nonderivative optimizers

Cited by 9 publications (8 citation statements); references 6 publications.
“…Since T is a diffeomorphism on D, T −1 is continuously differentiable; further assuming that ∂ T −1 ∂ x is bounded on D, T −1 is Lipschitz continuous on D with some Lipschitz constant L T . From Equation (13) we obtain (14), where e z and φ µ (t, |e z |) converge to zero as t → ∞ by (11) and (12), and r k (t) = z k+1 when t = mT for some positive integer m. Therefore, in theory it may take infinite time for the state x to converge to the set point x k+1 , which is undesirable. However, the robustness of optimization algorithms relaxes the requirement for perfect regulation: the optimization algorithm remains functional as long as the state is regulated to a neighborhood of x k+1 .…”
Section: Determination of Regulation Time
confidence: 85%
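The Lipschitz step in the excerpt can be made explicit. Assuming, as the authors do, that T −1 is Lipschitz on D with constant L T, the regulation error in the original coordinates is bounded by the error in the transformed coordinates:

```latex
% Since T^{-1} is Lipschitz on D with constant L_T:
\|x(t) - x_{k+1}\|
  = \|T^{-1}(z(t)) - T^{-1}(z_{k+1})\|
  \le L_T \,\|z(t) - z_{k+1}\| ,
% so driving z(t) into an (\epsilon / L_T)-neighborhood of z_{k+1}
% places x(t) in an \epsilon-neighborhood of x_{k+1},
% which is exactly the "regulation to a neighborhood" the excerpt invokes.
```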
“…Then the performance function in the steady state is parameterized by the control argument alone and can be optimized by tuning that argument directly. Thus, the extremum seeking loop is designed via the perturbation method [9], sliding mode [10], [11], or other analog optimizers [12]. The first rigorous proof [13] of local stability of perturbation-based extremum seeking control uses averaging analysis and singular perturbation, where a high-pass filter and a slow perturbation signal are employed to estimate the gradient, forming a continuous gradient-descent update law for the control argument.…”
Section: Introduction
confidence: 99%
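The perturbation scheme the excerpt describes (slow sinusoidal dither, high-pass filter, demodulation, gradient-descent integrator) can be sketched in discrete time. This is a minimal illustrative simulation, not the cited works' implementation; the map `f`, the gains, and the filter cutoff are assumed values chosen for the sketch.

```python
import math

def f(theta):
    # Unknown static performance map to be minimized; minimizer at theta* = 2.0
    return (theta - 2.0) ** 2

# Hypothetical tuning parameters (illustrative, not from the cited papers)
a, omega = 0.1, 5.0        # dither amplitude and frequency
k = 0.5                    # adaptation (gradient-descent) gain
omega_h = 1.0              # high-pass filter cutoff
dt, t_end = 1e-3, 200.0

theta_hat = 0.0            # parameter estimate
eta = f(theta_hat)         # low-pass state used to realize the high-pass filter
t = 0.0
while t < t_end:
    theta = theta_hat + a * math.sin(omega * t)   # perturbed control argument
    y = f(theta)                                  # measured performance
    y_hp = y - eta                                # first-order high-pass output
    eta += dt * omega_h * y_hp
    # Demodulate and integrate: y_hp * sin(omega t) averages to ~(a/2) f'(theta_hat),
    # so this is a continuous gradient-descent law, Euler-discretized
    theta_hat += dt * (-k) * y_hp * math.sin(omega * t)
    t += dt

# theta_hat should settle near the minimizer theta* = 2.0
```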
“…The aforementioned applications of extremum seeking use analog-optimization-based extremum seeking control, which reduces to a one-dimensional optimization problem for which several methods, such as singular perturbation, Lyapunov functions, sliding modes, and averaging, are available for designing stable control laws [29][30][31]. Extremum seeking control based on the gradient, or an estimate of it, is the most straightforward approach [32,33], but the requirement of continuously measuring and approximating the gradient or the Hessian is quite restrictive.…”
Section: Introduction
confidence: 99%
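The "most straightforward approach" mentioned above, a gradient-based update law, is simple when the gradient is directly measurable, which is exactly the strong requirement the excerpt criticizes. A minimal Euler-discretized sketch, with an assumed measurable gradient of an illustrative cost `J(theta) = (theta - 1)^2`:

```python
def grad_J(theta):
    # Assumed directly measurable gradient of J(theta) = (theta - 1)^2
    return 2.0 * (theta - 1.0)

k, dt = 1.0, 0.01          # illustrative gain and step size
theta = 5.0                # initial control argument
for _ in range(2000):
    # Continuous law theta_dot = -k * grad J(theta), Euler-discretized
    theta -= dt * k * grad_J(theta)

# theta converges to the minimizer theta* = 1.0
```

In practice the gradient is not measurable, which is why the perturbation, sliding-mode, and non-derivative schemes surveyed in these excerpts replace `grad_J` with an estimate built from output measurements alone.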
“…Gradient-estimation-based extremum seeking control is studied in [7]; numerical-optimization-based approaches can be found in [8] and [9]. Extremum seeking control based on sliding mode or continuous-time non-derivative optimizers can be found in [1] and [10], respectively. Many applications have been studied recently, such as bioreactor optimization [11], combustion instability [12], electromechanical valve actuators [13], thermoacoustic coolers [14], and human exercise machines [15].…”
Section: Introduction
confidence: 99%
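A non-derivative optimizer of the kind referenced above uses only cost comparisons, never a gradient. As a discrete-time illustration (not the analog circuit of the cited paper), a perturb-and-observe hill-descent: step in the current direction, and reverse with a smaller step whenever the measured cost worsens. The cost `J` and step schedule are assumptions for the sketch.

```python
def J(u):
    # Unknown cost to be minimized by output comparison only; minimizer at u* = -3.0
    return (u + 3.0) ** 2

u, step, direction = 0.0, 0.5, -1.0   # illustrative initial point and step schedule
prev = J(u)
for _ in range(200):
    u += direction * step
    cur = J(u)                        # only cost *values* are used, no derivatives
    if cur > prev:                    # cost worsened: reverse and shrink the step
        direction = -direction
        step *= 0.5
    prev = cur

# u settles near the minimizer u* = -3.0 without ever evaluating a gradient
```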