Numerical Nonsmooth Optimization 2020
DOI: 10.1007/978-3-030-34910-3_19
Model-Based Methods in Derivative-Free Nonsmooth Optimization

Cited by 10 publications (11 citation statements)
References 74 publications
“…Moreover, in the practical implementation of our algorithms, we have included a weight $0 \le \omega \le 1$ in the quadratic term of the models; that is, we replaced $\tfrac{1}{2} s^\top B_k s$ with $\tfrac{\omega}{2} s^\top B_k s$ in Problems (2) and (27). In this way, users of our solver can tune it to the degree of nonsmoothness of their problems.…”
Section: Implementation and Numerical Results
confidence: 99%
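As a minimal sketch of the weighted model described in this excerpt (the function name and signature are illustrative assumptions, not the authors' solver interface):

```python
import numpy as np

def model_value(f_k, g_k, B_k, s, omega=1.0):
    """Evaluate the local model m(s) = f_k + g_k^T s + (omega/2) s^T B_k s.
    omega in [0, 1] down-weights the quadratic (curvature) term,
    which can help on strongly nonsmooth objectives."""
    return f_k + g_k @ s + 0.5 * omega * s @ (B_k @ s)
```

Setting omega = 0 reduces the model to its linear part, while omega = 1 recovers the usual quadratic model.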
“…In a way similar to the application of direct-search methods to nonsmooth functions, our convergence results state that the Clarke generalized derivative is nonnegative at any limit point of a subsequence of unsuccessful iterates, along any direction in the unit sphere, assuming some form of asymptotic density of the vectors randomly generated for the linear terms of the models. The Hessian of the quadratic term added to the max-linear model term does not have to be […].¹ (Footnote 1: An anonymous Referee has drawn our attention to the recent works [2,22], the latter of trust-region type. However, both require the calculation of subgradients of approximate or nearby subdifferentials, and are therefore only applicable when the nonsmoothness of the objective function is known through some algebraic or composite form.)…”
Section: Introduction
confidence: 99%
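One ingredient in this excerpt is the random generation of directions that are asymptotically dense in the unit sphere. A standard way to achieve this, shown here as a generic sketch (not the paper's own code), is to normalize Gaussian samples:

```python
import numpy as np

def random_unit_direction(n, rng=None):
    """Draw a direction uniformly distributed on the unit sphere in R^n.
    An i.i.d. sequence of such directions is dense in the sphere with
    probability 1, which is the density assumption referred to above."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)
```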
“…3.1 Sample set $Z_k$, gradient set $D_k$, and generator set $G_k$. Manifold sampling is an iterative method that builds component models $m_{F_i}$ of each $F_i$ around the current point $x_k$. We place the first-order terms of each model in column $i$ of the matrix $\nabla M(x_k) \in \mathbb{R}^{n \times p}$, as in (2).…”
Section: Manifold Sampling for Piecewise-Smooth Compositions
confidence: 99%
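The excerpt describes collecting the component models' first-order terms column-wise. A rough illustration under simple assumptions (forward differences standing in for the model gradients; the helper name is hypothetical):

```python
import numpy as np

def gradient_matrix(F, x_k, p, h=1e-6):
    """Approximate nabla M(x_k) in R^{n x p}: column i holds a
    forward-difference estimate of the gradient of component F_i.
    Here F maps R^n to R^p; manifold sampling would instead use the
    gradients of its sampled component models."""
    n = x_k.size
    F0 = F(x_k)
    G = np.empty((n, p))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        G[j, :] = (F(x_k + e) - F0) / h   # row j: dF_i/dx_j for all i
    return G
```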
“…Derivative-Free Optimization (DFO) is the mathematical study of algorithms for continuous optimization that do not use first-order information [1]. In general, DFO methods can be categorized into model-based methods and direct-search methods.…”
Section: Introduction
confidence: 99%
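To make the dichotomy in this excerpt concrete, here is a generic coordinate-search sketch, a simple instance of direct search that uses only function values; it is a textbook illustration, not an algorithm from the cited work:

```python
import numpy as np

def coordinate_search(f, x, step=1.0, tol=1e-8):
    """Poll +/- step along each coordinate; move to any improving
    point, otherwise halve the step. No first-order information."""
    x = np.asarray(x, dtype=float)
    while step > tol:
        improved = False
        for j in range(x.size):
            for sgn in (+1.0, -1.0):
                y = x.copy()
                y[j] += sgn * step
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
    return x
```

A model-based method would instead fit and minimize a local model, such as the weighted quadratic shown earlier, to propose candidate steps.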
“…$_1$, then it is Lipschitz continuous on any convex compact set $K$ with constant $\max[\ldots]$ Lemma 29: Suppose the mapping $g = (g_1, \ldots, g_m$…”
confidence: 99%