2023
DOI: 10.3390/pr11051339
Integration of Multiple Bayesian Optimized Machine Learning Techniques and Conventional Well Logs for Accurate Prediction of Porosity in Carbonate Reservoirs

Abstract: The accurate estimation of reservoir porosity plays a vital role in estimating the amount of hydrocarbon reserves and evaluating the economic potential of a reservoir. It also aids decision making during the exploration and development phases of oil and gas fields. This study evaluates the integration of artificial intelligence techniques, conventional well logs, and core analysis for the accurate prediction of porosity in carbonate reservoirs. In general, carbonate reservoirs are characterized by their comple…

Cited by 6 publications (6 citation statements) | References 48 publications
“…Pseudocode of the utilized in-house ANN model:

1. Randomly assign initial weights and biases to the network.
2. Conduct a forward pass, calculating the output of the network using the input vector, weights, biases, and transfer functions.
3. Compare the network's output to the desired response and compute the global error using the following formula:

$$\mathrm{Error} = \frac{\sum_{1}^{n_1} \sum_{1}^{n_2} \left( y_t - y_p \right)}{n_1 \cdot n_2}$$

4. Update the weights and biases by propagating backward through one of several gradient-based algorithms (scaled conjugate gradient, one-step secant, or the Levenberg–Marquardt algorithm). The following convergence technique with an added acceleration term is used to speed up the network optimization process:

$$w(t+1) = w(t) + \beta \left[ \Delta w(t) \right] + \alpha \left[ \Delta w(t-1) \right]$$

where α is the momentum constant, w is the weight value, Δw is the weight change, t is the training epoch, and β is the learning constant. The constants α and β are employed to increase the step size and damp abrupt gradient changes; both are confined between 0 and 1.

5. Recalculate the network output using the updated weights and biases by repeating steps 2–4.
6. Report the final optimized set of weights and biases when the model achieves a predetermined level of accuracy or reaches the maximum number of iterations. …”
Section: Methods
confidence: 99%
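The quoted steps translate almost line for line into code. Below is a minimal NumPy sketch of such a training loop, assuming a single hidden layer with a sigmoid transfer function and a plain squared-error gradient in place of the scaled conjugate gradient, one-step secant, or Levenberg–Marquardt updates named above; function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def train_ann(X, y_t, n_hidden=8, alpha=0.9, beta=0.01,
              epochs=5000, tol=1e-4, seed=0):
    """One-hidden-layer ANN trained by gradient descent with momentum (sketch).

    alpha: momentum constant in (0, 1), scales the previous weight change.
    beta:  learning constant in (0, 1), scales the current weight change.
    """
    rng = np.random.default_rng(seed)
    n1, n_in = X.shape                  # n1 training samples
    n2 = y_t.shape[1]                   # n2 output variables
    # Step 1: randomly assign initial weights and biases.
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, n2));   b2 = np.zeros(n2)
    dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(epochs):
        # Step 2: forward pass through the transfer functions.
        H = sigmoid(X @ W1 + b1)        # hidden-layer activations
        y_p = H @ W2 + b2               # linear output layer
        err = y_t - y_p
        # Step 3: global error exactly as in the quoted formula
        # (a signed mean, so residuals of opposite sign can cancel).
        global_error = err.sum() / (n1 * n2)
        if abs(global_error) < tol:     # predetermined accuracy reached
            break
        # Step 4: backpropagate a squared-error gradient (a stand-in for the
        # SCG / one-step secant / Levenberg-Marquardt options in the paper).
        g_out = -2.0 * err / (n1 * n2)              # dL/dy_p for MSE
        dW2 = -(H.T @ g_out)                        # current weight change
        g_hid = (g_out @ W2.T) * H * (1.0 - H)      # sigmoid derivative
        dW1 = -(X.T @ g_hid)
        # Momentum update: w(t+1) = w(t) + beta*dw(t) + alpha*dw(t-1).
        W1 += beta * dW1 + alpha * dW1_prev
        W2 += beta * dW2 + alpha * dW2_prev
        b1 -= beta * g_hid.sum(axis=0)
        b2 -= beta * g_out.sum(axis=0)
        dW1_prev, dW2_prev = dW1, dW2   # step 5: repeat steps 2-4
    # Step 6: report the final optimized weights and biases.
    return W1, b1, W2, b2
```

Because the quoted global error is a signed mean, positive and negative residuals can cancel; a mean absolute or mean squared error is the safer stopping criterion in practice.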
“…Pseudocode of the utilized in-house ANN model [37]: Randomly assign initial weights and biases to the network. Conduct a forward pass, calculating the output of the network using the input vector, weights, biases, and transfer functions.…”
Section: Methods
confidence: 99%
“…An integrative approach is adopted, leveraging core analysis data, conventional logs, and geological characteristics. Our work, which builds on recent advancements in productive zone determination using conventional logs [32,33,34,35,36], presents an innovative method for integrating diverse datasets to assess the quality of productive reservoir sections in carbonate reservoirs. The study's limitations due to the absence of advanced log data, such as DSI and NMR, and production tests like PLT and flowmeter, are acknowledged.…”
Section: Introduction
confidence: 99%
“…The learning process is split at the level of each sampling period, and consequently, a new data structure is derived for each of them. With each new data structure, which is a collection of (state, control value) pairs, a generic regression function [22,27,28] is associated. The latter is realized as an ML regression model dedicated to the sampling period at hand, which must be capable of giving accurate predictions.…”
confidence: 99%
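As a concrete sketch of the arrangement this statement describes, the class below keeps one regression model per sampling period, each fitted only on that period's collection of (state, control value) pairs. scikit-learn's GradientBoostingRegressor stands in for the generic regression function; the class and its names are illustrative assumptions, not the cited authors' code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

class PerPeriodRegressor:
    """One regression model per sampling period (illustrative sketch).

    Each sampling period k owns its own dataset of (state, control value)
    pairs and an independent regressor fitted on that dataset only.
    """

    def __init__(self, n_periods):
        self.models = [GradientBoostingRegressor() for _ in range(n_periods)]

    def fit(self, datasets):
        # datasets[k] is a (states, control_values) pair for period k,
        # with states of shape (n_samples, n_features).
        for model, (states, controls) in zip(self.models, datasets):
            model.fit(states, controls)
        return self

    def predict(self, period, state):
        # Route the query to the model devoted to this sampling period.
        return self.models[period].predict(np.atleast_2d(state))

# Hypothetical usage: three sampling periods, 4-dimensional states.
# datasets = [(np.random.rand(100, 4), np.random.rand(100)) for _ in range(3)]
# model = PerPeriodRegressor(n_periods=3).fit(datasets)
# u_next = model.predict(period=1, state=[0.2, 0.5, 0.1, 0.9])
```

Splitting by sampling period keeps each model's training distribution homogeneous, at the cost of fitting and storing one regressor per period.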