“…This process involved training competing models on nine randomly selected data folds, while one fold was reserved for testing performance in each run. The procedure was performed using the grid search method by testing a range of learning rates (0.1, 0.01 and 0.001), tree complexity (1-4), and number of trees (50-1000, step 50) for BRT; number of iterations (50-250, step 50), degrees of freedom (1-12) and shrinkage (0.25-1, step 0.25) for AdaBoost; and gamma (0-5, step 1), interaction depth (1-4), shrinkage (0.1-0.5, step 0.1) and number of rounds (10-100, step 10) for XGBoost. Monotonic responses [4] were forced to be positive or negative to reduce overfitting, in line with the expected outcomes for species distribution.…”
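To make the tuning procedure concrete, the following is a minimal sketch of the XGBoost part of the grid search under ten-fold cross-validation, assuming a scikit-learn/xgboost workflow with presence-absence labels; the variable names (`X`, `y`), the data-loading step, and the AUC scoring metric are assumptions for illustration and are not taken from the source.

```python
# Sketch of a grid search over the XGBoost hyperparameter ranges quoted above,
# with ten-fold cross-validation (nine folds for training, one held out per run).
# X (environmental predictors) and y (presence/absence labels) are hypothetical.
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

# X, y = load_occurrence_data()  # hypothetical data-loading step

param_grid = {
    "gamma": list(range(0, 6)),                  # gamma 0-5, step 1
    "max_depth": list(range(1, 5)),              # interaction depth 1-4
    "learning_rate": [0.1, 0.2, 0.3, 0.4, 0.5],  # shrinkage 0.1-0.5, step 0.1
    "n_estimators": list(range(10, 101, 10)),    # number of rounds 10-100, step 10
}

# Ten folds: each run trains on nine folds and tests on the remaining one.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

search = GridSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    param_grid=param_grid,
    scoring="roc_auc",   # assumed evaluation metric
    cv=cv,
    n_jobs=-1,
)
# search.fit(X, y)
# print(search.best_params_)
```

The forced positive or negative monotonic responses mentioned in the excerpt could be expressed in this setup through the `monotone_constraints` parameter of `XGBClassifier` (a tuple of +1/-1/0 per predictor), although the exact constraint specification used in the original study is not given here.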