2019
DOI: 10.1007/s00158-019-02404-6
A model-independent adaptive sequential sampling technique based on response nonlinearity estimation

Cited by 8 publications (5 citation statements) | References 41 publications
“…Without the clustering penalty, the error-maximization sampling stopped after five iterations, corresponding to 150 sampled points, an increase of 30 sampling points compared to the case with the clustering penalty. The resulting surrogate model has an error in the same range, although the MRAE is reduced for the surrogate model of reaction (16) while the RRMSE is increased for both reactions. One potential reason for this behavior can be seen in the simultaneous sampling approach.…”
Section: Surrogate Model Development
Confidence: 98%
“…One potential reason for this behavior can be seen in the simultaneous sampling approach. The error-maximization sampling will focus on the surrogate model for reaction (16), as its MRAE is worse. Correspondingly, fewer points are sampled in the region in which the surrogate model of reaction (20) is worse, which resulted in an increase in the MRAE.…”
Section: Surrogate Model Development
Confidence: 99%
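The excerpts above describe error-maximization sampling with a clustering penalty: new points are placed where the estimated surrogate error is largest, but the score is discounted for candidates close to already-sampled points so that samples do not pile up in one region. A minimal sketch of such a criterion is given below; the function `next_sample`, the exponential penalty form, and the `penalty_weight` parameter are illustrative assumptions, not the scheme used in the cited work.

```python
import numpy as np

def next_sample(candidates, sampled, surrogate_error, penalty_weight=1.0):
    """Pick the candidate with the largest estimated surrogate error,
    discounted for clustering near already-sampled points.

    Hypothetical scheme: score = error * (1 - exp(-w * d_nearest)),
    so the score vanishes as a candidate approaches an existing sample.
    """
    errors = np.array([surrogate_error(c) for c in candidates])
    # Distance from each candidate to its nearest already-sampled point.
    dists = np.min(
        np.linalg.norm(candidates[:, None, :] - sampled[None, :, :], axis=-1),
        axis=1,
    )
    scores = errors * (1.0 - np.exp(-penalty_weight * dists))
    return candidates[np.argmax(scores)]
```

With a uniform error estimate, the penalty alone decides, and the candidate farthest from the existing samples is chosen; with a strongly nonuniform error estimate, the error term dominates and sampling concentrates where the surrogate is worst.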
See 1 more Smart Citation
“…In addition, an efficient optimization algorithm for identifying the new sample and the leave-one-out cross-validation weightings are also discussed. Garbo and German used a nonlinearity index as the refinement metric and the mean neighborhood distance for exploration. They balanced the two metrics via a stochastic Pareto-ranking-based selection criterion that attempts to simultaneously maximize both refinement and exploration.…”
Section: Literature Review and Analysis
Confidence: 99%
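The selection criterion described above treats refinement (nonlinearity index) and exploration (mean neighborhood distance) as two objectives to maximize simultaneously and draws the next sample from the non-dominated set. A minimal sketch, assuming a plain Pareto front over two candidate scores and a uniform random draw from that front; the function names and the uniform-draw rule are assumptions for illustration, not the exact stochastic criterion of the cited paper:

```python
import numpy as np

def pareto_front(refine, explore):
    """Indices of non-dominated candidates when maximizing both metrics.

    Candidate i is dominated if some j is at least as good in both
    metrics and strictly better in one.
    """
    n = len(refine)
    front = []
    for i in range(n):
        dominated = any(
            refine[j] >= refine[i] and explore[j] >= explore[i]
            and (refine[j] > refine[i] or explore[j] > explore[i])
            for j in range(n)
        )
        if not dominated:
            front.append(i)
    return front

def select_point(refine, explore, rng=None):
    """Stochastic Pareto-ranking selection (sketch): draw uniformly from
    the non-dominated set, trading off refinement and exploration
    without fixing a weighting between them."""
    rng = rng or np.random.default_rng()
    front = pareto_front(np.asarray(refine), np.asarray(explore))
    return int(rng.choice(front))
```

Because the draw is over the whole front, points that are strong in either refinement or exploration remain eligible, which is what lets the criterion pursue both goals at once rather than collapsing them into a single weighted score.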
“…To ensure that models meet the accuracy requirements, it is common to iteratively refine them through adaptive sampling, sometimes also labeled infill or active learning. Various criteria are available, and the interested reader is referred to the literature for a more in-depth introduction [12][13][14]. Accounting for all levels of fidelity at hand during the adaptive sampling stage should reduce the overall computational cost of a given investigation, as the available information is used efficiently.…”
Section: Introduction
Confidence: 99%