2019
DOI: 10.1080/0305215x.2019.1584618
An efficient local search-based genetic algorithm for constructing optimal Latin hypercube design

Cited by 30 publications
(11 citation statements)
References 36 publications
“…On the other hand, Latin hypercube sampling provides a stratified sampling framework for improved coverage of the k-dimensional input space (e.g. McKay et al., 2000; Helton and Davis, 2003; Iman, 2008; Clifford et al., 2014; Shields and Zhang, 2016; Shang et al., 2020). Conditioned Latin hypercube sampling is an attempt to draw a sample that captures the variation of multiple environmental variables.…”
Section: Uncertainties In Global Mean Temperaturementioning
confidence: 99%
“…Additionally, the Monte Carlo samples can contain some points clustered closely, while other intervals within the space obtain no sample. On the other hand, Latin hypercube sampling provides a stratified sampling framework for improved coverage of the k-dimensional input space (e.g., McKay et al., 2000; Helton and Davis, 2003; Iman, 2008; Clifford et al., 2014; Shields and Zhang, 2016; Shang et al., 2020). Conditioned Latin hypercube sampling is an attempt to draw a sample that captures the variation in multiple environmental variables.…”
Section: Subsample Of Hyperparameter Ensemble Datamentioning
confidence: 99%
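The stratified construction described in the quoted passage can be sketched in a few lines. This is a minimal illustration of plain (unconditioned, unoptimized) Latin hypercube sampling; the function name and layout are ours, not from any of the cited works:

```python
import numpy as np

def latin_hypercube(n, k, rng=None):
    """Draw n points in [0, 1]^k with exactly one point in each of the
    n equal-width strata of every dimension."""
    rng = np.random.default_rng(rng)
    # For each dimension, shuffle the n stratum indices independently,
    # then jitter each point uniformly within its stratum of width 1/n.
    u = rng.random((n, k))
    perms = np.stack([rng.permutation(n) for _ in range(k)], axis=1)
    return (perms + u) / n

pts = latin_hypercube(5, 2, rng=0)
```

Unlike plain Monte Carlo, every one-dimensional marginal is guaranteed to be evenly covered: projecting the sample onto any axis hits each of the n intervals exactly once.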
“…According to publication [31], PermGA used population sizes of 20 × dimensions for small LHDs and 10 × dimensions for medium and large LHDs, while the elite size, crossover rate and mutation rate were 5, 0.8 and 0.05 [15], respectively. For LSGA, the author of [25] suggested that the population size P, mutation probability m_p, parameters p_max and p_min, and distance ratio c be 10, 0.2, 0.3, 0.01 and 0.5, respectively, while ILS required no parameter settings.…”
Section: A Experimental Settingmentioning
confidence: 99%
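The settings quoted above can be collected into a small configuration sketch. The key and function names below are our own illustrative shorthand, not identifiers from any of the cited implementations:

```python
# PermGA settings quoted from [31]/[15]: elite size 5, crossover 0.8,
# mutation 0.05; population size scales with problem dimension.
permga_settings = {
    "elite_size": 5,
    "crossover_rate": 0.8,
    "mutation_rate": 0.05,
}

def permga_pop_size(dims, size_class):
    """Population-size rule quoted from [31]: 20*dims for small LHDs,
    10*dims for medium and large LHDs."""
    return (20 if size_class == "small" else 10) * dims

# LSGA settings suggested in [25].
lsga_settings = {
    "population": 10,       # P
    "mutation_prob": 0.2,   # m_p
    "p_max": 0.3,
    "p_min": 0.01,
    "distance_ratio": 0.5,  # c
}
```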
“…Husslage et al. [28] compared the simulated annealing (SA), ESE and PermGA algorithms, and the results showed that ESE found better solutions than SA and PermGA in almost all cases. Moreover, the performance of the ESE algorithm for constructing OLHDs with high space-filling quality was further validated in [14], [22], [23] and [25] through comparison with SOBSA, GA, SLE, SLHD, LSGA and a novel extension algorithm. The results revealed that the ESE algorithm is a significantly efficient and robust algorithm for optimizing LHDs within 10 dimensions.…”
Section: Introductionmentioning
confidence: 97%