2021
DOI: 10.1016/j.cageo.2021.104688
A new strategy for spatial predictive mapping of mineral prospectivity: Automated hyperparameter tuning of random forest approach

Cited by 68 publications (22 citation statements)
References 33 publications
“…The classification performance of the Decision Tree, Random Forest, and Deep Forest models on each ReLink dataset was evaluated using the AUC (Area under the ROC Curve). AUC was chosen as the evaluation metric because it is well suited to assessing predictive performance on datasets with class-imbalance problems [24].…”
Section: Evaluasi
unclassified
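The quoted passage above motivates AUC over accuracy for imbalanced data. The following minimal sketch (synthetic data, scikit-learn; not taken from the cited paper) illustrates the point: on a roughly 90/10 class split, a trivial majority-class predictor already scores high accuracy, while AUC measures how well the classifier ranks positives above negatives independently of any threshold.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# AUC: threshold-independent ranking quality of the predicted probabilities.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Accuracy of a trivial "always predict the majority class" baseline:
# high on imbalanced data despite being useless, which is why the cited
# work prefers AUC for class-imbalanced evaluation.
baseline_acc = accuracy_score(y_te, np.zeros_like(y_te))
```

Here the baseline accuracy lands near 0.9 purely from the class ratio, so it carries no information about the minority class; AUC does not suffer from this distortion.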
“…The performance of the model can be further fine-tuned by varying the set of hyperparameters. Model performance is highly sensitive to the hyperparameter values (Daviran et al, 2021), and several cost-effective methods are available for multi-criteria optimisation (Liu et al, 2017). The number of trees in the forest, the maximum number of features considered for splitting a node, the maximum depth of the tree, and the minimum number of samples required to split a node are varied in this study to improve the efficiency of the model.…”
Section: Machine Learning Algorithm and Performance Evaluation
mentioning
confidence: 99%
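The passage above names four random forest hyperparameters that were varied. A minimal sketch of tuning exactly those four with scikit-learn's grid search follows; the candidate values, synthetic dataset, and use of `GridSearchCV` are illustrative assumptions, not the cited study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the study's training data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# The four hyperparameters named in the passage; candidate values are
# hypothetical and would be chosen per problem in practice.
param_grid = {
    "n_estimators": [25, 50],          # number of trees in the forest
    "max_features": ["sqrt", None],    # max features considered per split
    "max_depth": [5, None],            # maximum depth of each tree
    "min_samples_split": [2, 5],       # min samples required to split a node
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,
    scoring="roc_auc",  # AUC scoring, consistent with the evaluation above
)
search.fit(X, y)
best_params = search.best_params_
```

`best_params` then holds the combination with the highest cross-validated AUC; grid search is the simplest option, and the cost-effective multi-criteria optimisers mentioned in the passage would replace the exhaustive grid with a cheaper search strategy.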
“…Therefore, in our work, a k-fold cross-validation method was used (23). k-fold cross-validation divides the training data into k equal parts, uses k−1 of those parts to set the search range of the hyperparameters, and validates model performance on the remaining part as validation data. This validation process is repeated k times, and the hyperparameters with the lowest generalization error are selected for the final model, to which the test data are then applied.…”
Section: Predictive Evaluation Using Machine Learning Techniques
mentioning
confidence: 99%
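The k-fold procedure described above can be sketched as follows. This is an illustrative implementation on synthetic data; the candidate hyperparameter sets are hypothetical placeholders for the search range mentioned in the passage, and AUC is used so that "lowest generalization error" corresponds to the highest mean validation AUC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

# Synthetic stand-in for the training data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Hypothetical candidate hyperparameter sets to compare.
candidates = [{"max_depth": 3}, {"max_depth": None}]

kf = KFold(n_splits=5, shuffle=True, random_state=0)

mean_auc = {}
for params in candidates:
    fold_scores = []
    # Each pass fits on k-1 folds and validates on the held-out fold,
    # repeated k times as the passage describes.
    for train_idx, val_idx in kf.split(X):
        model = RandomForestClassifier(n_estimators=50, random_state=0, **params)
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[val_idx])[:, 1]
        fold_scores.append(roc_auc_score(y[val_idx], proba))
    mean_auc[str(params)] = float(np.mean(fold_scores))

# Lowest generalization error = highest mean validation AUC here.
best = max(mean_auc, key=mean_auc.get)
```

The winning candidate would then be refit on the full training set and evaluated once on the untouched test data, matching the final step in the quoted description.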