2023
DOI: 10.3390/en16093653
Interpretable Predictive Modeling of Tight Gas Well Productivity with SHAP and LIME Techniques

Abstract: Accurately predicting well productivity is crucial for optimizing gas production and maximizing recovery from tight gas reservoirs. Machine learning (ML) techniques have been applied to build predictive models for well productivity, but their high complexity and low interpretability can hinder their practical application. This study proposes using interpretable ML solutions, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to provide explicit explanations of …
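The SHAP method named in the abstract is grounded in Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution across all coalitions of the other features. As a minimal sketch (not the paper's implementation), the exact Shapley computation can be written by brute-force enumeration of coalitions, here with a hypothetical toy additive "model" standing in for a trained productivity predictor:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions (feasible only for small n).

    value_fn(S) returns the model's output for a coalition S (a frozenset of
    feature indices); feature i's Shapley value is its coalition-weighted
    average marginal contribution over all subsets not containing i.
    """
    phi = np.zeros(n_features)
    players = range(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy additive game: features contribute 1.0, 2.0, 3.0 when present.
contrib = [1.0, 2.0, 3.0]
v = lambda S: sum(contrib[j] for j in S)
phi = shapley_values(v, 3)
```

For an additive game like this, the Shapley values recover each feature's individual contribution exactly; practical SHAP libraries approximate this sum efficiently rather than enumerating all 2^n coalitions.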

Cited by 6 publications (1 citation statement)
References 31 publications (27 reference statements)
“…It constructs a locally interpretable model near the prediction to approximate the complex model's decision boundary, revealing how each feature influences the prediction on a local scale [60]. Both methods play a crucial role in increasing the accountability of AI systems, showcasing their adaptability across various industries [59,61].…”
Section: XAI Local Explanations (mentioning confidence: 99%)
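The citing passage describes LIME's core move: fit a simple, interpretable model on perturbed samples near the instance, weighted by proximity, so its coefficients approximate the black-box model's local behavior. A minimal sketch of that idea, assuming a tabular setting with Gaussian perturbations and a hypothetical black-box `predict_fn` (not the LIME library's API):

```python
import numpy as np

def lime_local_surrogate(predict_fn, instance, n_samples=500,
                         kernel_width=0.75, noise_scale=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around `instance`.

    Returns (coefficients, intercept); the coefficients indicate how each
    feature influences the black-box prediction on a local scale.
    """
    rng = np.random.default_rng(seed)
    d = instance.shape[0]
    # Sample perturbations around the instance (tabular-LIME-style).
    X = instance + rng.normal(scale=noise_scale, size=(n_samples, d))
    y = predict_fn(X)
    # Exponential proximity kernel: nearby perturbations get higher weight.
    dist = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Weighted least squares with intercept, solved in closed form.
    A = np.hstack([X, np.ones((n_samples, 1))])
    W = np.diag(w)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[:-1], theta[-1]

# Black-box "model": nonlinear in x0, linear in x1, ignores x2.
f = lambda X: np.sin(X[:, 0]) + 2.0 * X[:, 1]
coefs, intercept = lime_local_surrogate(f, np.array([0.0, 1.0, 3.0]))
```

Near x0 = 0 the surrogate's first coefficient approaches the local slope cos(0) = 1, the second recovers the linear weight 2, and the third stays near 0 for the ignored feature, which is exactly the local-attribution behavior the citation describes.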