2022
DOI: 10.1021/acs.est.2c02232
Predicting Dynamic Riverine Nitrogen Export in Unmonitored Watersheds: Leveraging Insights of AI from Data-Rich Regions

Abstract: Terrestrial export of nitrogen is a critical Earth system process, but its global dynamics remain difficult to predict at a high spatiotemporal resolution. Here, we use deep learning (DL) to model daily riverine nitrogen export in response to hydrometeorological and anthropogenic drivers. Long short-term memory (LSTM) models for the daily concentration and flux of dissolved inorganic nitrogen (DIN) were built in a coastal watershed in southeastern China with a typical subtropical monsoon climate. The DL models…
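The study's own code is not reproduced on this page. As a rough sketch of the kind of LSTM regressor the abstract describes, assuming PyTorch; the layer sizes, lookback window, and driver features below are illustrative assumptions, not the reported architecture:

```python
import torch
import torch.nn as nn

class DINRegressor(nn.Module):
    """Minimal LSTM regressor mapping daily driver sequences to a DIN target.

    Hypothetical sketch: feature count, hidden size, and lookback window are
    assumptions, not the architecture reported in the paper.
    """
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one target, e.g. DIN concentration

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) — e.g. precipitation, temperature,
        # discharge, and anthropogenic inputs over a window of daily steps
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last time step

# Example: 365-day lookback, 8 drivers, batch of 16 sequences
model = DINRegressor()
y_hat = model(torch.randn(16, 365, 8))  # (16, 1) predicted daily DIN
```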

Cited by 23 publications (5 citation statements) · References 70 publications
“…Unknown reasons behind predictions have been a shortcoming limiting DL driven by artificial intelligence (AI) from being fully harnessed in application fields. To compensate for this shortcoming, explainable AI or explainable DL has become an emerging research area, as it makes black-box models more comprehensible and strengthens users’ confidence in their predictions. Many feature importance methods, such as SHAP, have been adopted to enhance model explainability. SHAP is based on the unification of game theory and local explanations and can be used for global and local explainability analysis of models.…”
Section: Methods
Mentioning, confidence: 99%
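As a concrete illustration of the SHAP analysis this statement describes — a sketch only, where the model and data are placeholders and shap.GradientExplainer (which approximates SHAP values via expected gradients) stands in for whatever explainer the cited studies used:

```python
import numpy as np
import shap
import torch

# Hypothetical trained model and data; reuses the DINRegressor sketch above.
model = DINRegressor()
background = torch.randn(100, 365, 8)  # reference sample of driver sequences
test = torch.randn(10, 365, 8)         # sequences to explain

# GradientExplainer approximates SHAP values by expected gradients and
# handles recurrent nets, where DeepExplainer's op support can be limited.
explainer = shap.GradientExplainer(model, background)
shap_values = np.asarray(explainer.shap_values(test))

# Global explainability: mean |SHAP| per driver, aggregated over samples and
# time steps (some shap versions add an output dimension to the result).
global_importance = np.abs(shap_values).mean(axis=(0, 1))
```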
“…Nevertheless, this structure also makes its black-box nature obvious: the model’s internal principles and mechanisms are unknown, which has led to controversy [28]. Explainable DL is now an emerging research front in many fields, and there have been attempts in the field of wastewater treatment process modeling. Hwangbo et al. developed a deep neural network for process modeling and applied a global sensitivity analysis based on variance decomposition to identify the key parameters.…”
Section: Introduction
Mentioning, confidence: 99%
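The variance-decomposition approach attributed to Hwangbo et al. is typically a Sobol-style global sensitivity analysis; below is a minimal sketch using the SALib library, where the parameter names, bounds, and toy surrogate function are hypothetical stand-ins for a trained deep neural network:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical process parameters; a real study would use its trained DNN.
problem = {
    "num_vars": 3,
    "names": ["influent_flow", "temperature", "aeration_rate"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

def surrogate(p):
    # Toy stand-in for a trained deep neural network's prediction.
    return p[0] * np.sin(3 * p[1]) + 0.5 * p[2] ** 2

X = saltelli.sample(problem, 1024)        # Saltelli sampling design
Y = np.apply_along_axis(surrogate, 1, X)  # evaluate the surrogate model
Si = sobol.analyze(problem, Y)            # variance decomposition
print(Si["S1"], Si["ST"])  # first-order and total-order Sobol indices
```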
“…The Shapley value method (SHAP) quantifies the contribution of each participant in a cooperative game when their actions result in joint outcomes (Shapley, 1953). It can be used to measure feature importance in DL methods (Lundberg et al., 2020; Lundberg & Lee, 2017; Xiong et al., 2022). In this study, we evaluate the influence of each pollutant source via SHAP based on the groundwater contamination at spatial locations of interest predicted by our proposed aGNN.…”
Section: Methods
Mentioning, confidence: 99%
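For reference, the Shapley value this statement invokes assigns feature $i$ the weighted average of its marginal contributions over all subsets $S$ of the remaining features $N \setminus \{i\}$, with $v(S)$ the model's expected output given only the features in $S$:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr]$$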
“…With the advancement of explanation techniques, it has become increasingly possible to obtain generalizable, machine‐captured patterns. In this study, we apply two DL explanation techniques that have already shown promising results in the field of surface water research (Jiang et al., 2022; Xiong et al., 2022). The EG technique relies on the gradient of a model's output with respect to its input features to identify the contributions of each input (Erion et al., 2021).…”
Section: Methods
Mentioning, confidence: 99%
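A simplified sketch of the expected-gradients idea this statement describes — integrated gradients with baselines drawn from the data distribution (Erion et al., 2021). The PyTorch implementation below is an illustrative Monte Carlo approximation, not the cited paper's code:

```python
import torch

def expected_gradients(model, x, background, n_samples=50):
    """Monte Carlo expected-gradients attribution (sketch): explains inputs
    `x` against baselines sampled from `background` (same trailing shape)."""
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        # Draw a baseline per sample and a random interpolation coefficient.
        idx = torch.randint(0, background.shape[0], (x.shape[0],))
        baseline = background[idx]
        alpha = torch.rand(x.shape[0], *([1] * (x.dim() - 1)))
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        # Gradient of the summed output with respect to the interpolated input.
        grad, = torch.autograd.grad(model(point).sum(), point)
        attributions += (x - baseline) * grad
    return attributions / n_samples

# e.g., attributions = expected_gradients(model, test, background)
```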