2012
DOI: 10.1016/j.mineng.2012.05.008

Interpretation of nonlinear relationships between process variables by use of random forests

Cited by 159 publications (89 citation statements)
References: 24 publications
“…To visualize the direction and nature of the relationship between the most important predictors and each response, and thus assess how the influence of each predictor scales across lakes, we constructed partial dependency plots (PDP) for the most important predictors from each RF model. Partial dependence is computed by predicting the response (e.g., DCM depth or thickness) from the RF over a range of values for the variable of interest, while holding all other variables in the dataset constant (Friedman ; also see Auret and Aldrich for an excellent description). In essence, a PDP represents the relationship between a single variable and the response, after accounting for the average effects or interactions of the other predictors in the model (Carlisle et al ).…”
Section: Methods (mentioning)
confidence: 99%
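The partial-dependence computation described in the excerpt above can be sketched in a few lines. The code below is a minimal illustration, not code from the cited paper; the scikit-learn RandomForestRegressor and the column name in the usage comment are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def partial_dependence_curve(model, X, feature, grid_points=20):
    """Friedman-style partial dependence: predict over a grid of values for
    `feature` while every other column keeps its observed values, then average."""
    grid = np.linspace(X[feature].min(), X[feature].max(), grid_points)
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value                          # hold the variable of interest fixed
        averaged.append(model.predict(X_mod).mean())    # average predictions over all rows
    return grid, np.array(averaged)

# Hypothetical usage (the column name "thermocline_depth" is illustrative only):
# rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
# grid, pdp = partial_dependence_curve(rf, X, "thermocline_depth")
```

In practice, scikit-learn ships this computation as sklearn.inspection.partial_dependence (with PartialDependenceDisplay for plotting), which is usually preferable to a hand-rolled loop; the sketch is only meant to make the averaging step explicit.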
“…Explanation by simplification (Decision Tree/Prototype): [84,118,122]; Feature relevance explanation (Feature importance / contribution): [103,104,240,241]; Visual explanation (Variable importance / attribution): [104,241], [242,243]. Support Vector Machines…”
Section: Ensembles and Multiple Classifier Systems (mentioning)
confidence: 99%
“…Also, determination of the significance of weakly important variables is difficult, as correlations among predictor variables will also influence the variable importance [31]. Therefore, the importance measure was calculated in conjunction with the following process.…”
Section: Variable Order Of Importance (mentioning)
confidence: 99%
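As an illustration of the point about correlated predictors (not taken from the citing paper), the following synthetic-data sketch uses scikit-learn's permutation importance; the variable names and the data-generating process are invented for the example. Because x2 is a near-duplicate of the true driver x1, the forest can lean on either column, so the importance credited to x1 is diluted and the ranking of weakly important variables becomes unreliable.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic example: x1 drives the response, x2 is strongly correlated with x1,
# x3 is pure noise. Importance is shared between x1 and x2.
rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # near-duplicate of x1
x3 = rng.normal(size=n)              # irrelevant predictor
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print(dict(zip(["x1", "x2", "x3"], result.importances_mean.round(3))))
```

Running this typically shows x2 receiving a non-trivial share of the importance that belongs to x1, while x3 stays near zero, which is the behaviour the quoted statement cautions about.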