2020
DOI: 10.1007/978-981-15-5772-9_5
Intelligent Data Analytics Approaches for Predicting Dissolved Oxygen Concentration in River: Extremely Randomized Tree Versus Random Forest, MLPNN and MLR

Abstract: Control of water quality by monitoring water variables remains of major importance for the protection of human life (El Najjar et al. 2019). Rivers and streams are the major components of freshwater ecosystems and constitute the source of life for both humans and animals, yet they have become the most "endangered ecosystems" in the world (Kumar and Jayakumar 2020; Kebede et al. 2020; Emenike et al. 2020). Year after year, considerable work has been carried out to maintain good water quality status (Jerves-Cobo…

Cited by 11 publications (2 citation statements) | References 47 publications
“…It is an algorithm that constructs an ensemble of un-pruned decision trees in a top-down fashion. The approach involves random cut-point selection for node splitting, rather than using bootstrap replicas, and utilizes the entire learning sample to grow the trees (Heddam, 2021). Moreover, this algorithm substantially reduces the prediction model's variance, which helps prevent overfitting, improves generalization performance, and slightly increases the model's bias, which may introduce a small trade-off between bias and variance.…”
Section: Extreme Gradient Boosting (XGBoost)
confidence: 99%
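The splitting and sampling behavior the statement above attributes to this algorithm (random cut-points rather than searched ones, and the full learning sample instead of bootstrap replicas) can be illustrated with scikit-learn's `ExtraTreesRegressor` — a minimal sketch, not code from the paper or its citing works:

```python
# Sketch: Extra-Trees vs. Random Forest in scikit-learn.
# Extra-Trees draws cut-points at random and, by default, grows each tree
# on the entire learning sample (bootstrap=False); Random Forest searches
# for optimal cut-points on bootstrap replicas (bootstrap=True).
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

ert = ExtraTreesRegressor(n_estimators=100, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0)

print("Extra-Trees uses bootstrap replicas:", ert.get_params()["bootstrap"])   # False
print("Random Forest uses bootstrap replicas:", rf.get_params()["bootstrap"])  # True
print(f"Extra-Trees  5-fold CV R2: {cross_val_score(ert, X, y, cv=5).mean():.3f}")
print(f"Random Forest 5-fold CV R2: {cross_val_score(rf, X, y, cv=5).mean():.3f}")
```

The extra randomization in the cut-points is what drives the variance reduction (and the slight bias increase) mentioned in the excerpt.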
“…While this modeling strategy has the advantage of a simple and direct mathematical formulation, its generalization capabilities can be poor in some cases, and determining the best model parameters requires a hard trial-and-error process; alternative approaches based on model optimization using MOAs can therefore be seen as complementary, leading to a better selection of the model parameters. The literature review reveals that several single models for DO prediction are available in the literature, such as: the long short-term memory (LSTM) deep neural network model [14,15]; deterministic models (i.e., MINLAKE2018) [16]; the linear dynamic system and filtering model [17]; the gated recurrent unit (GRU) deep neural network model [18]; support vector regression (SVR) [19]; the stochastic vector auto regression (SVAR) model [20]; the radial basis function neural network (RBFNN) [21]; the multilayer perceptron neural network (MLPNN) [22]; polynomial chaos expansions (PCE) [23]; random forest regression (RFR) [24]; the extremely randomized tree [25]; and multivariate adaptive regression splines (MARS) [26].…”
Section: Introduction
confidence: 99%
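The four-model comparison named in the chapter title (extremely randomized tree versus random forest, MLPNN, and MLR) can be sketched in scikit-learn. This is a hedged stand-in on synthetic data — the chapter's actual river measurements, predictor set, and hyperparameters are not reproduced here, and the variable names (temperature, pH, conductance) are illustrative assumptions:

```python
# Sketch of an ERT vs. RF / MLPNN / MLR comparison for DO prediction
# on synthetic stand-in data (not the chapter's dataset).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 800
# Hypothetical water-quality predictors of dissolved oxygen (DO).
temp = rng.uniform(5, 30, n)        # water temperature (degC)
ph = rng.uniform(6.5, 8.5, n)       # pH
cond = rng.uniform(100, 900, n)     # specific conductance (uS/cm)
do = 14.6 - 0.35 * temp + 0.8 * (ph - 7) - 0.002 * cond + rng.normal(0, 0.4, n)

X = np.column_stack([temp, ph, cond])
X_tr, X_te, y_tr, y_te = train_test_split(X, do, test_size=0.3, random_state=42)

models = {
    "MLR": LinearRegression(),
    "MLPNN": make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(20,),
                                        max_iter=2000, random_state=42)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=42),
    "ERT": ExtraTreesRegressor(n_estimators=200, random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test R2 = {model.score(X_te, y_te):.3f}")
```

Held-out R² (alongside RMSE and MAE) is the usual yardstick in DO-prediction studies of this kind for ranking the candidate models.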