“…The gradient-boosting hyperparameter grid (grid search) was: n_estimators = 500, 1000, 2000; learning_rate = 0.0001, 0.001, 0.01, 0.1; max_depth = 1, 2, 4; subsample = 0.5, 0.75, 1; random_state = 1. Hyperparameter tuning for XGBoost covered learning_rate = 0.001, 0.01, 0.05, 0.1; max_depth = 3, 5, 7, 10, 20; min_child_weight = 1, 3, 5; subsample = 0.5, 0.7; colsample_bytree = 0.5, 0.7; n_estimators = 50, 100, 200, 500, 1000; objective = 'reg:squarederror'. Grid search was also employed to select the best n_neighbors for KNN, specified as the candidate set [2, 3, 4, 5, 6, 7, 8, 9, 11]. MARS was fitted to the training data of all three datasets using K-fold cross-validation.…”
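For concreteness, the sketch below reproduces these grids with scikit-learn's GridSearchCV and the xgboost package. The five-fold CV setting, the default scoring metric, and the X_train/y_train names are assumptions for illustration, not details stated in the excerpt.

```python
# Minimal sketch of the grid searches described above. The cv=5 setting and
# the X_train/y_train names are assumptions; the grids match the text.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor

# Gradient-boosting grid, as listed in the text (random_state fixed at 1).
gb_grid = {
    "n_estimators": [500, 1000, 2000],
    "learning_rate": [0.0001, 0.001, 0.01, 0.1],
    "max_depth": [1, 2, 4],
    "subsample": [0.5, 0.75, 1.0],
}
gb_search = GridSearchCV(GradientBoostingRegressor(random_state=1), gb_grid, cv=5)

# XGBoost grid, as listed in the text.
xgb_grid = {
    "learning_rate": [0.001, 0.01, 0.05, 0.1],
    "max_depth": [3, 5, 7, 10, 20],
    "min_child_weight": [1, 3, 5],
    "subsample": [0.5, 0.7],
    "colsample_bytree": [0.5, 0.7],
    "n_estimators": [50, 100, 200, 500, 1000],
}
xgb_search = GridSearchCV(XGBRegressor(objective="reg:squarederror"), xgb_grid, cv=5)

# KNN grid over the n_neighbors candidates from the text.
knn_search = GridSearchCV(
    KNeighborsRegressor(), {"n_neighbors": [2, 3, 4, 5, 6, 7, 8, 9, 11]}, cv=5
)

# MARS could be fitted analogously under K-fold CV (e.g., with py-earth's
# Earth estimator); that choice of package is an assumption here.

# Each search would then be fitted and inspected as, e.g.:
# gb_search.fit(X_train, y_train)
# print(gb_search.best_params_)
```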