2020
DOI: 10.36227/techrxiv.12149901
Preprint

Hyper-Parameter Initialization for Squared Exponential Kernel-based Gaussian Process Regression

Abstract: Hyper-parameter optimization is an essential task in the use of machine learning techniques. Such optimization typically starts from an initial guess supplied for the hyper-parameter values, followed by minimization of some cost function via gradient-based methods. The initial values are crucial because the cost functions being minimized often have local minima, which gradient-based optimizers are prone to reaching. Therefore, initializing hyper-parameters seve…
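The sensitivity to initialization described in the abstract can be sketched numerically. The following is a minimal illustration (not the paper's own method): it fits the three SE-kernel hyper-parameters (signal variance, length scale, noise variance) by minimizing the negative log marginal likelihood with BFGS from two different initial guesses, on synthetic data invented here for demonstration.

```python
# Sketch: SE-kernel GPR hyper-parameter fitting via gradient-based
# minimization of the negative log marginal likelihood (NLML).
# Different initial guesses may converge to different local minima,
# which is the motivation for principled hyper-parameter initialization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)[:, None]            # synthetic inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)  # noisy targets

def se_kernel(X1, X2, signal_var, length_scale):
    # Squared exponential (RBF) kernel on 1-D inputs.
    d2 = (X1 - X2.T) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def nlml(log_params):
    # Negative log marginal likelihood; parameters are optimized in
    # log space so they stay positive.
    signal_var, length_scale, noise_var = np.exp(log_params)
    K = se_kernel(X, X, signal_var, length_scale) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))
            + 0.5 * len(X) * np.log(2 * np.pi))

# Two different initial guesses (in log space); BFGS may land in
# different optima depending on where it starts.
for init in ([0.0, 0.0, -2.0], [2.0, 2.0, 1.0]):
    res = minimize(nlml, np.array(init), method="BFGS")
    print("init:", init, "-> hyper-params:", np.round(np.exp(res.x), 3),
          "NLML:", round(res.fun, 3))
```

The printed hyper-parameters and final NLML values can differ between the two runs, which is exactly the local-minimum behaviour the abstract warns about.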

Cited by 1 publication (1 citation statement)
References 9 publications
“…41,42 The implementation of GPC, via GPy, 44 was broken into the data pre-processing phase (tabulating all computed metrics for each fiber tip of interest), the training phase (choice of a suitable kernel function and optimization routine), and the testing phase (testing the kernel function’s performance), as shown in Figure 5. Pre-processing included tabulating input variables (microstructural metrics) and output variables (the associated class of computed microstructural variables for each fiber tip of interest: 1 for micro-void nucleation and 0 for pristine), and then normalizing the entries of the microstructural metrics 45 to avoid numerical instability. 46 The training phase used the Matérn 5/2 kernel and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization routine.…”
Section: Data Analysis for Microstructural Metrics
confidence: 99%
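The three-phase workflow the citation statement describes (pre-process, train, test) can be sketched as follows. The cited work used GPy; here scikit-learn stands in, with a Matérn 5/2 kernel and scikit-learn's default L-BFGS-B optimizer in place of plain BFGS. The feature matrix below is a synthetic placeholder for the tabulated microstructural metrics, not the paper's data.

```python
# Sketch of the cited GPC workflow: pre-processing (tabulate + normalize),
# training (Matern 5/2 kernel, gradient-based optimization), testing.
# Data and labels are synthetic stand-ins for microstructural metrics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))                   # placeholder metrics
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = micro-void nucleation, 0 = pristine

# Pre-processing phase: normalize the metric entries to avoid
# numerical instability during kernel computations.
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Training phase: Matern 5/2 kernel; hyper-parameters fitted by
# gradient-based optimization of the log marginal likelihood.
clf = GaussianProcessClassifier(kernel=Matern(nu=2.5), random_state=0)
clf.fit(X_tr, y_tr)

# Testing phase: evaluate the fitted kernel's classification performance.
print("held-out accuracy:", clf.score(X_te, y_te))
```

Swapping the `Matern(nu=2.5)` kernel for another (e.g. RBF) and comparing held-out accuracy mirrors the "choice of a suitable kernel function" step the statement mentions.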