2022
DOI: 10.1007/s13369-022-07240-3
Explicit Neural Network-Based Models for Bubble Point Pressure and Formation Volume Factor Prediction

Cited by 5 publications (6 citation statements)
References 95 publications
“…This is because the R values obtained for the various networks are close to 1, with their respective MSE values close to zero. From a statistical standpoint, the R and MSE values obtained for the networks trained with clip-normalized datasets strongly indicate the networks' good prediction performance [25,28].…”
mentioning
confidence: 88%
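A minimal sketch of the two metrics discussed in the excerpt above, the correlation coefficient R (ideally close to 1) and the mean squared error MSE (ideally close to 0), computed between network predictions and measured targets. The arrays are placeholder values for illustration only, not data from the cited study.

```python
import numpy as np

# Placeholder target and prediction values (e.g. bubble point pressure, psia)
targets = np.array([2500.0, 3100.0, 1800.0, 2750.0])
predictions = np.array([2480.0, 3150.0, 1790.0, 2805.0])

r = np.corrcoef(targets, predictions)[0, 1]   # Pearson correlation coefficient R
mse = np.mean((targets - predictions) ** 2)   # mean squared error

print(f"R = {r:.4f}, MSE = {mse:.2f}")
```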
“…where 𝑦 π‘ π‘π‘Žπ‘™π‘’π‘‘ denotes the scaled values for input or output parameters, 𝑦 𝑖 is the values of the non-normalized parameters, 𝑦 π‘šπ‘–π‘› and 𝑦 π‘šπ‘Žπ‘₯ represent the minimum and maximum values of the non-normalized parameters, respectively. According to Okon and Ansa [24] and Okon et al [25], normalizing the datasets for the neural network training is necessary for the following reasons: adequate adjustment of the network connecting weights for optimum prediction and reducing the sensitivity of the sigmoidal (i.e. transfer or activation) function to large datasets values.…”
Section: Data Acquisition and Preparation
mentioning
confidence: 99%
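A minimal sketch of the min-max scaling described in the excerpt above, assuming the usual form y_scaled = (y - y_min) / (y_max - y_min) that maps raw parameter values to the [0, 1] range; the function and variable names are illustrative and not taken from the cited papers.

```python
import numpy as np

def min_max_scale(y):
    """Scale a 1-D array of raw parameter values to the [0, 1] range."""
    y = np.asarray(y, dtype=float)
    y_min, y_max = y.min(), y.max()
    return (y - y_min) / (y_max - y_min)

# Example: scaling one input column (placeholder values) before network training
temperatures = [150.0, 180.0, 210.0, 250.0]
print(min_max_scale(temperatures))  # [0.  0.3  0.6  1. ]
```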