SEG Technical Program Expanded Abstracts 2012
DOI: 10.1190/segam2012-1013.1
Using kernel principal component analysis to interpret seismic signatures of thin shaly-sand reservoirs

Cited by 10 publications (8 citation statements). References 6 publications.
“…The basic idea of kernel principal component analysis (KPCA) is to project each sample vector x_k in the input space R^N into a high-dimensional feature space F through a nonlinear map φ, and then to perform principal component analysis in that feature space [8][9][10]. Due to the high dimensionality of the feature space, explicitly computing the nonlinear mapping and decomposing the characteristic variables becomes very difficult, but the explicit mapping can be avoided by introducing the kernel trick [11][12][13]: only the dot products of the original-space variables need to be computed [6].…”
Section: Kernel Principal Component Analysis (KPCA)
confidence: 99%
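The kernel trick described in the quotation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited authors' implementation: the RBF kernel choice, the `gamma` parameter, and the synthetic two-cluster data are all illustrative assumptions. The map φ is never formed; only the Gram matrix of pairwise kernel evaluations (feature-space dot products) is needed.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Kernel PCA via the kernel trick: phi is never computed explicitly;
    only feature-space dot products (the Gram matrix K) are needed."""
    # RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-gamma * sq_dists)

    # Center K so the feature-space data have zero mean
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centered Gram matrix; keep leading components
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize eigenvectors so feature-space components have unit length
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))

    # Projections of the training samples onto the kernel principal components
    return Kc @ alphas

# Illustrative data: two well-separated clusters in R^3
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(20, 3)), rng.normal(size=(20, 3)) + 5.0])
Z = kernel_pca(X, n_components=2, gamma=0.1)
```

With a linear kernel `K = X @ X.T` this reduces to ordinary PCA, which is the sense in which KPCA generalizes it.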
“…The low-order DCT coefficients express most of the variability of the original signal, and model compression is accomplished simply by zeroing the coefficients of the basis functions beyond a certain threshold. However, each compression technique must be applied bearing in mind that part of the information in the original (unreduced) parameter space may be lost in the reduced space; thus, the model parameterization must always strike a compromise between model resolution and model uncertainty (Aleardi, 2020; Aleardi & Salusti, 2021; Dejtrakulwong et al., 2012; Fernández-Martínez et al., 2017; Grana et al., 2019; Lochbühler et al., 2014). Similarly, the data parameterization must strike a compromise between good data resolution (our ability to match the observed data) and accurate uncertainty quantification.…”
Section: Introduction
confidence: 99%
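The DCT truncation scheme described above can be sketched as follows. This is a hedged illustration under stated assumptions: the smooth Gaussian-bump "model" vector and the choice of keeping the 10 lowest-order coefficients are hypothetical, standing in for whatever model parameterization the cited works use.

```python
import numpy as np
from scipy.fft import dct, idct

# Hypothetical smooth 1-D model vector (e.g. a property profile along depth)
n = 128
x = np.linspace(0.0, 1.0, n)
model = np.exp(-((x - 0.5) ** 2) / 0.02)

# Forward DCT; low-order coefficients capture most of the variability
coeffs = dct(model, norm="ortho")

# Compress by zeroing all coefficients beyond the threshold (keep the first k)
k = 10
compressed = np.zeros_like(coeffs)
compressed[:k] = coeffs[:k]

# Reconstruct from the reduced representation
reconstructed = idct(compressed, norm="ortho")

# The information lost in the reduced space shows up as reconstruction error
rel_error = np.linalg.norm(model - reconstructed) / np.linalg.norm(model)
```

Raising `k` lowers `rel_error` but enlarges the reduced space, which is exactly the resolution-versus-uncertainty trade-off the quoted passage describes.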
“…In this case, the full state space is projected onto a limited number of basis functions and the algorithm generates samples in this reduced domain. This technique must be applied bearing in mind that part of the information in the original (unreduced) parameter space may be lost in the reduced space; for this reason, the model parameterization must always strike a compromise between model resolution and model uncertainty (Dejtrakulwong et al., 2012; Lochbühler et al., 2014; Aleardi, 2019; Grana et al., 2019; Aleardi, 2020b).…”
Section: Introduction
confidence: 99%