2021
DOI: 10.48550/arxiv.2112.08411
Preprint

Error Analysis of Surrogate Models Constructed through Operations on Sub-models

Abstract: Model-based methods are popular in derivative-free optimization (DFO). In most of them, a single model function is built to approximate the objective function. This is generally based on the assumption that the objective function is one blackbox. However, some real-life and theoretical problems show that the objective function may consist of several blackboxes. In such problems, the information provided by each blackbox may differ. In this situation, one could build multiple sub-models that are then com…

Cited by 2 publications (1 citation statement); references 17 publications.
“…In each iteration of MS-P we first construct G_k according to (3) and ensure that, for each j ∈ G_k, f_j(x_k) + (g_j^k)^⊤ s is a gradient-accurate model of f_j on B(x_k; Δ_k). We remark that, given a method to construct fully linear (and, therefore, gradient-accurate) models M of F, it is straightforward to show under Assumption 4 that h(M(x)) is a gradient-accurate model of h(F(x)); see [18, Theorem 32]. We then (approximately) solve the subproblem (6) to obtain (v_k, s_k).…”
Section: High-level Discussion of MS-P and GOOMBAH
confidence: 99%
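The composition argument in the citation statement — that applying a smooth outer function h to a fully linear model M of F yields a gradient-accurate model of h(F(x)) — can be illustrated numerically. The sketch below is not the cited paper's implementation: the functions F, h, and the forward-difference model builder `build_linear_model` are illustrative stand-ins, assuming a linear (Taylor-like) model in place of a general fully linear model.

```python
import numpy as np

def build_linear_model(F, x, eps=1e-6):
    """Build a forward-difference linear model M(x + s) ~= F(x) + J s,
    a simple stand-in for a fully linear model on a trust region."""
    Fx = np.asarray(F(x), dtype=float)
    n, m = x.size, Fx.size
    J = np.empty((m, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        J[:, i] = (np.asarray(F(x + e), dtype=float) - Fx) / eps
    # M takes a step s and returns the model value at x + s.
    return (lambda s: Fx + J @ s), J

# Illustrative problem data (not from the paper):
# F maps R^2 -> R^2; h is a smooth outer function (linear, for simplicity).
F = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[1])])
h = lambda y: y[0] + 2.0 * y[1]

x = np.array([1.0, 0.5])
M, J = build_linear_model(F, x)

# For a small step s, the composite model h(M(s)) agrees with the true
# composite value h(F(x + s)) up to second-order terms in ||s||.
s = np.array([1e-4, -1e-4])
residual = abs(h(M(s)) - h(F(x + s)))
print(residual)  # residual shrinks quadratically as s shrinks
```

Because h here is linear, the composite model error is exactly the model error in F pushed through h, so the residual is dominated by the second-order Taylor remainder of F; the same first-order agreement is what "gradient-accurate" captures for general smooth h via the chain rule.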