All Days 2016
DOI: 10.2118/180277-ms

Incorporation of Bootstrapping and Cross-Validation for Efficient Multivariate Facies and Petrophysical Modeling

Abstract: An integrated multivariate statistical procedure was adopted for accurate lithofacies classification and prediction, to be incorporated with well-log attributes into core permeability modeling. Logistic Boosting Regression and Generalized Linear Modeling were adopted for lithofacies classification and core permeability estimation, respectively. Logistic Boosting Regression (LogitBoost) was used to model the lithofacies sequences given well-log and core data, to predict the discrete lithofacies distribution at mis…
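The abstract's two-stage workflow (a boosted logistic classifier for facies, then a linear model for permeability) might be sketched as below. This is a hypothetical illustration on synthetic data, not the paper's implementation: scikit-learn provides no LogitBoost, so `GradientBoostingClassifier` (which also fits an additive logistic model) stands in for it, and plain linear regression stands in for the GLM.

```python
# Hedged sketch of a two-stage facies/permeability workflow; all data and
# coefficients are synthetic, and GradientBoostingClassifier is a stand-in
# for LogitBoost (not available in scikit-learn).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
logs = rng.normal(size=(200, 4))                     # well-log attributes (e.g. GR, RHOB, NPHI, DT)
facies = (logs[:, 0] + logs[:, 1] > 0).astype(int)   # synthetic lithofacies labels
log_perm = 0.5 * logs[:, 2] - 0.3 * logs[:, 3] + 0.1 * facies  # synthetic log-permeability

# Stage 1: classify lithofacies from well-log attributes (LogitBoost analogue).
clf = GradientBoostingClassifier(random_state=0).fit(logs, facies)

# Stage 2: GLM-style permeability model using logs plus the predicted facies.
X = np.column_stack([logs, clf.predict(logs)])
glm = LinearRegression().fit(X, log_perm)
print(round(glm.score(X, log_perm), 2))
```

The key design point is that the discrete facies prediction enters the permeability model as an extra regressor, so classification errors propagate into the permeability estimate.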

Cited by 27 publications (7 citation statements)
References 15 publications
“…The K-fold cross-validation helps in avoiding overfitting and estimating good results [56]. Figure 14 reflects the highest R² value and lowest MSE for the optimized 10 folds, authenticating the outputs and validating the performance of the chosen workflow. The performance of the various algorithms was evaluated using statistical metrics such as MAE, RMSE, and R², as demonstrated in Figures 15 and 16, and this revealed that the ETR performed best for predicting shale volume and effective porosity, with a maximum correlation coefficient of 1.…”
Section: Petrophysical Interpretation - Advanced Machine-Learning Methods
confidence: 56%
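The 10-fold evaluation the quote describes might be sketched as below. The data are synthetic and `ExtraTreesRegressor` is assumed as the "ETR" regressor the citation mentions; the metric set (MAE, RMSE, R²) matches the quote.

```python
# Minimal sketch of 10-fold cross-validation scored with MAE, RMSE, and R²;
# synthetic data, with ExtraTreesRegressor assumed as the "ETR" model.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                        # e.g. well-log attributes
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + 0.05 * rng.normal(size=300)

maes, rmses, r2s = [], [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
    model = ExtraTreesRegressor(n_estimators=100, random_state=1)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    maes.append(mean_absolute_error(y[test_idx], pred))
    rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    r2s.append(r2_score(y[test_idx], pred))

print(f"MAE={np.mean(maes):.3f}  RMSE={np.mean(rmses):.3f}  R2={np.mean(r2s):.3f}")
```

Because every sample serves once as held-out data, the averaged fold metrics are less sensitive to one lucky split than a single train/test partition, which is why the quoted workflow uses them to guard against overfitting.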
“…Their means reflect the scores of the algorithms, as mentioned before. The K-fold cross-validation helps in avoiding overfitting and estimating good results. The referenced figure reflects the highest R² value and lowest MSE for the optimized 10 folds, authenticating the outputs and validating the performance of the chosen workflow.…”
Section: Results
confidence: 99%
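The "means" this quote refers to are the averages of the per-fold scores, which scikit-learn exposes directly. A minimal sketch on synthetic data (model choice and features are assumptions):

```python
# One R² score per fold; the mean of the fold scores summarizes the
# algorithm's performance, as in the quoted passage. Synthetic data.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

scores = cross_val_score(ExtraTreesRegressor(n_estimators=100, random_state=2),
                         X, y, cv=10, scoring="r2")
print(round(scores.mean(), 3))
```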
“…The random sampling cross-validation was implemented across the entire experiment by sampling and dividing the given data set into two groups: a 30% testing set for forecasting and prediction, and a 70% training set for building the model [43]. More explicitly, the training subset served as the basis for modeling cumulative oil production, obtained from the reservoir simulator, as a function of operational parameters. Predictions are then made on the testing-subset data using both the simulator and the proxy model.…”
Section: Proxy Modeling
confidence: 99%
“…Cross-validation is necessary to maximize the chance of reaching global optima and to enhance the forecast precision of the proxy model. The random sampling cross-validation was implemented across the entire experiment by sampling and dividing the given data set into two groups: a 30% testing set for forecasting and prediction, and a 70% training set for building the model [43]. More explicitly, the training subset served as the basis for modeling cumulative oil production, obtained from the reservoir simulator, as a function of operational parameters.…”
Section: Proxy Modeling
confidence: 99%
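The random 70/30 partition both quotes describe is a single random holdout split. A minimal sketch, assuming synthetic data and a plain linear model in place of the study's simulator-driven proxy:

```python
# Sketch of the random-sampling validation from the quotes: one random
# 70/30 split into training and testing subsets. Data and model are
# synthetic stand-ins for the reservoir-simulator outputs and proxy model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))              # e.g. operational parameters
y = X @ np.array([2.0, -1.0, 0.5])         # e.g. cumulative oil production

# 70% to build the proxy model, 30% held out for forecast testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=3)
proxy = LinearRegression().fit(X_tr, y_tr)
print(X_tr.shape[0], X_te.shape[0], round(proxy.score(X_te, y_te), 2))
```

Because the split is random rather than fold-based, each run gives one estimate of forecast precision; repeating with different seeds approximates the distribution of that estimate.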
“…After obtaining the dataset, it is necessary to consider how to divide it for training, validation, and testing [43]. A previous study shows that different dataset partitions have a definite influence on the training results and that reasonable dataset partitioning can effectively improve the accuracy of the model [28].…”
Section: Machine Learning Training Design
confidence: 99%
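The three-way partition this quote calls for can be built from two chained random splits. A minimal sketch; the 60/20/20 ratios are illustrative assumptions, not taken from the cited study:

```python
# Sketch of a train/validation/test partition via two chained random
# splits (ratios are illustrative: 60/20/20 overall). Synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

# First carve off 20% as the final, untouched test set...
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.20, random_state=4)
# ...then split the remainder 75/25 into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=4)
print(len(X_train), len(X_val), len(X_test))
```

Keeping the test set out of both fitting and model selection is what makes its score an honest estimate; the validation set absorbs the tuning decisions instead.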