2017
DOI: 10.1016/j.colsurfa.2017.07.013
Prediction of froth flotation responses based on various conditioning parameters by Random Forest method

Cited by 48 publications (21 citation statements) · References 21 publications
“…As a tree-based statistical model, random forest (RF) was developed by Breiman et al. [23]. This powerful machine learning (ML) model can provide low-bias, low-variance outcomes with highly accurate predictions [24][25][26][27]. In this system, an estimate is generated as the average over all trees through bagging: bootstrap samples L(θ) of size n are drawn from the training set L of size N, and each tree T_{L(θ)} is associated with the random vector θ given for the bagged samples from the main training set L. The final predictor f is the average over the forest (with y'_η the estimated response for sample x_η and K the size of the ensemble) [28][29][30], as follows:…”
Section: Random Forest (mentioning)
Confidence: 99%
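As an illustration of the bagging average described in the excerpt above, the following is a minimal Python sketch, not the cited paper's implementation: K regression trees are each fit on a bootstrap sample L(θ) of size n drawn from the training set L of size N, and the final predictor is the mean of the K tree outputs. The helper names fit_forest and predict_forest are hypothetical.

```python
# Minimal sketch of the bagged-average predictor described above.
# fit_forest / predict_forest are illustrative names, not from the cited paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_forest(X, y, K=100, n=None, seed=0):
    """Fit K regression trees, each on a bootstrap sample L(theta_k)
    of size n drawn (with replacement) from the training set L of size N."""
    rng = np.random.default_rng(seed)
    N = len(X)
    n = n or N
    forest = []
    for _ in range(K):
        idx = rng.integers(0, N, size=n)                # bootstrap indices
        forest.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return forest

def predict_forest(forest, X):
    """Final predictor f: average of the K tree predictions for each sample x_eta."""
    preds = np.stack([tree.predict(X) for tree in forest])  # shape (K, n_samples)
    return preds.mean(axis=0)
```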
“…In addition, in most implementations the so-called out-of-bag (generalization) errors are calculated automatically, and model performance is not particularly sensitive to the few hyperparameters that require tuning. As a consequence, the popularity of these models in the process industries is growing rapidly, with applications, for example, in predictive modeling [11,23], fault diagnosis and root cause analysis [24,25], change point detection [26], and diagnostic modeling [27,28]. Random forests consist of ensembles of decision trees, as briefly summarized below.…”
Section: Random Forests (mentioning)
Confidence: 99%
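To illustrate the automatically computed out-of-bag estimate mentioned in this excerpt, here is a generic scikit-learn sketch on synthetic data; the feature and response choices are assumptions for demonstration only, not data from the cited works.

```python
# Illustrative only: shows the out-of-bag (generalization) score that most
# random-forest implementations compute from samples left out of each bootstrap.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                                  # e.g. conditioning parameters (synthetic)
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)   # e.g. a flotation response (synthetic)

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)
print("Out-of-bag R^2:", rf.oob_score_)   # estimated without a separate validation set
```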
“…Afghanistan acquires most of its electricity from neighboring countries. For more than a century, one of the most significant impediments in Afghanistan has been the lack of adequate fuel for power generation (Hare et al 2008; SanFilipo 2005; Wnuk 2016), and around 70% of Afghans do not have reliable access to electricity (Doebrich et al 2006; Hackley et al 2010; Hare et al 2008; Tewalt et al 2010; Wnuk 2016). However, studies conducted to identify potential coal deposits have reported that Afghanistan has potentially abundant coal resources (Jacob 1961; Siebdrat and Weippert 1963; Wood et al 1983).…”
Section: Introduction (mentioning)
Confidence: 99%