2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019
DOI: 10.1109/iros40897.2019.8967736
Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection

Abstract: Bayesian Optimization (BO) is an effective method for optimizing expensive-to-evaluate black-box functions, with a wide range of applications, for example in robotics, system design, and parameter optimization. However, scaling BO to problems with large input dimensions (>10) remains an open challenge. In this paper, we propose to leverage results from optimal control to scale BO to higher-dimensional control tasks and to reduce the need for manually selecting the optimization domain. The contributions of this pa…
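The abstract describes scaling GP-based Bayesian optimization to controller and policy parameter tuning. For orientation, the sketch below shows a generic GP-based BO loop with an expected-improvement acquisition; it is not the paper's method (in particular, it performs no automatic domain selection), and `evaluate_policy`, the box bounds, and the evaluation budget are illustrative assumptions.

```python
# Minimal sketch of GP-based Bayesian optimization for tuning controller
# parameters; illustrative only, not the paper's automatic-domain-selection
# method. `evaluate_policy` is a hypothetical black box that runs the closed
# loop and returns a scalar cost to be minimized.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_policy(theta):
    # Placeholder for an expensive closed-loop rollout (simulation or hardware).
    return float(np.sum((theta - 0.3) ** 2) + 0.01 * np.random.randn())

def expected_improvement(mu, sigma, best):
    # Standard EI acquisition for minimization.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

dim, bounds = 4, (0.0, 1.0)                       # assumed optimization domain
X = np.random.uniform(*bounds, size=(5, dim))     # initial design
y = np.array([evaluate_policy(x) for x in X])

for _ in range(20):                               # assumed evaluation budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = np.random.uniform(*bounds, size=(2000, dim))   # random candidate set
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_policy(x_next))

print("best parameters:", X[np.argmin(y)], "cost:", y.min())
```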

Cited by 12 publications (11 citation statements) | References 15 publications
“…In the context of unconstrained optimal control, Marco et al. (104) introduced a parametric cost function $l(x, u, \theta_l) = x^\top Q(\theta_l)\, x + u^\top R(\theta_l)\, u$ and optimized the parameters $\theta_l$ in order to compensate for deviations of the true dynamics $f_t$ from the linear prediction dynamics $f$, such that the performance metric in closed loop is improved. In a similar setup, Fröhlich et al. (105) investigated learning in higher-dimensional spaces by automatic domain adaptation. For constrained optimal control, the approach of Bansal et al. (106) instead uses Bayesian optimization to learn the parameters $\theta_f$ of a linear prediction model $f(x, u, \theta_f) = A(\theta_f)\, x + B(\theta_f)\, u$ in an MPC, similarly with the goal of improving the closed-loop performance of a specific task for unknown, potentially nonlinear and stochastic true system dynamics $f_t$.…”
Section: Bayesian Optimization For Controller Tuning (mentioning)
confidence: 99%
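The Marco et al. setup quoted above, tuning the LQR design weights $Q(\theta_l)$ and $R(\theta_l)$ so that the resulting gain performs well on the mismatched true dynamics under a fixed performance metric, can be sketched in a few lines. The nominal and true system matrices, the weight parameterization, and the metric below are illustrative assumptions, not taken from the cited papers; a BO routine such as the loop sketched earlier would then minimize `closed_loop_cost` over the parameters.

```python
# Sketch of the cost-shaping idea described above: tune the design weights
# Q(theta), R(theta) of an LQR synthesized on a nominal linear model so that
# the mismatched *true* system performs well under a fixed performance metric.
# All matrices below are illustrative stand-ins, not from the cited papers.
import numpy as np
from scipy.linalg import solve_discrete_are

A_nom = np.array([[1.0, 0.1], [0.0, 1.0]])       # nominal prediction model f
B_nom = np.array([[0.0], [0.1]])
A_true = np.array([[1.0, 0.12], [0.02, 0.98]])   # unknown true dynamics f_t
B_true = np.array([[0.0], [0.11]])

def closed_loop_cost(theta, T=200):
    # theta parameterizes the design weights: Q = diag(exp(theta[:2])), R = exp(theta[2]).
    Q = np.diag(np.exp(theta[:2]))
    R = np.array([[np.exp(theta[2])]])
    P = solve_discrete_are(A_nom, B_nom, Q, R)
    K = np.linalg.solve(R + B_nom.T @ P @ B_nom, B_nom.T @ P @ A_nom)
    # Roll out the gain on the true system and score it with a fixed metric
    # (here: identity state/input weights), independent of theta.
    x, cost = np.array([1.0, 0.0]), 0.0
    for _ in range(T):
        u = -K @ x
        cost += x @ x + float(u @ u)
        x = A_true @ x + (B_true @ u).ravel()
    return cost

# A BO routine (e.g. the loop sketched earlier) would now minimize this cost.
print(closed_loop_cost(np.zeros(3)))
```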
“…These functions are also known as objectives with 'active subspaces' [12] or 'multi-ridge' [23,52]. They are frequently encountered in applications, typically when tuning (over)parametrized models and processes, such as in hyper-parameter optimization for neural networks [3], heuristic algorithms for combinatorial optimization problems [32], complex engineering and physical simulation problems [12] as in climate modelling [35], and policy search and dynamical system control [57,24].…”
Section: Introduction (mentioning)
confidence: 99%
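The 'active subspace' or 'multi-ridge' structure referenced in this excerpt means that the objective varies only along a few linear directions, i.e. f(x) = g(Wᵀx) for a tall, thin matrix W. The synthetic sketch below (hypothetical g and W, not from any of the cited works) shows how that structure appears as a low-rank gradient outer-product matrix, which is the quantity active-subspace methods analyze.

```python
# Illustration of a 'multi-ridge' objective with a low-dimensional active
# subspace: f(x) = g(W^T x) varies only along the columns of W, so the matrix
# C = E[grad f grad f^T] has rank <= 2. Synthetic example with a hypothetical g.
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 2                                       # ambient and active dimensions
W, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal active directions

def f(x):
    z = W.T @ x                                    # only the projection matters
    return np.sin(z[0]) + 0.5 * z[1] ** 2

def grad_f(x):
    z = W.T @ x
    return W @ np.array([np.cos(z[0]), z[1]])

# Estimate C from random samples and inspect its spectrum.
X = rng.uniform(-1, 1, size=(500, d))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)
eigvals = np.linalg.eigvalsh(C)[::-1]
print("leading eigenvalues:", np.round(eigvals[:4], 4))   # only 2 are non-negligible
```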
“…A few approaches have been proposed in this context to handle input uncertainty [53, 54]. Most recently, Fröhlich et al. [55] have introduced an acquisition function for GP-based Bayesian optimization for the identification of robust optima. This formulation is analytically intractable and the authors propose two numerical approximation schemes.…”
Section: Background and Related Work (mentioning)
confidence: 99%
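The robust formulation referenced here is analytically intractable in general. One generic numerical route, distinct from the acquisition function of Fröhlich et al. mentioned in the excerpt, is a Monte Carlo approximation of the expected objective under input noise; the one-dimensional objective below is hypothetical and only illustrates why such a formulation prefers broad optima over sharp ones.

```python
# Sketch of handling input uncertainty by optimizing the *expected* objective
# J(x) = E_eps[ f(x + eps) ] via Monte Carlo, which favors broad, robust optima.
# Generic illustration only, not the cited acquisition function.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Hypothetical objective: a sharp peak at 0.2 and a broad peak at 0.7.
    return np.exp(-((x - 0.2) / 0.01) ** 2) + 0.8 * np.exp(-((x - 0.7) / 0.1) ** 2)

def robust_objective(x, sigma=0.05, n_mc=2000):
    # Monte Carlo estimate of E[f(x + eps)] with eps ~ N(0, sigma^2).
    return float(np.mean(f(x + sigma * rng.standard_normal(n_mc))))

xs = np.linspace(0, 1, 201)
print("nominal optimum:", xs[np.argmax([f(x) for x in xs])])                    # the sharp peak
print("robust optimum :", xs[np.argmax([robust_objective(x) for x in xs])])     # the broad peak
```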