The study concluded that the random tree classifier was the winner under various performance metrics.

Ensemble method              References
RUSBoost                     [120,137,146]
Not mentioned                [137]
LogitBoost                   [94,103,120]
GentleBoost                  [53,120]
LPBoost                      [62]
RealBoost                    [53,56]
MultiBoost                   [56]
CatBoost                     [107,147]
ModestBoost                  [53]
Random subspace              [46,137,146]
Rotation forest              [55,56,70,85,110,111]
Maximum probability voting   [68,106,135]
Product probability voting   [68,135]
Sum probability voting       [76]
Minimum probability voting   [68,145]
Median probability voting    [106,145]
Bayesian                     [98]

Al-Jarrah et al. [80] proposed a semi-supervised multi-layered clustering (SMLC) model for IDSs. The performance of SMLC was compared with that of supervised ensemble ML models and a well-known semi-supervised model (i.e., tri-training).
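The probability-based combination rules listed in the table (maximum, product, sum, minimum, and median voting) can be sketched as follows. This is a minimal illustrative example using NumPy with made-up probability values; it is not drawn from any of the cited studies.

```python
import numpy as np

# Hypothetical class-probability outputs of three base classifiers
# for one sample over three classes (rows: classifiers, cols: classes).
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
])

# Each rule reduces the per-classifier probabilities to one score
# per class; the ensemble then predicts the argmax of those scores.
rules = {
    "maximum": probs.max(axis=0),        # strongest support from any member
    "product": probs.prod(axis=0),       # assumes roughly independent members
    "sum":     probs.sum(axis=0),        # equivalent to averaging
    "minimum": probs.min(axis=0),        # most pessimistic member wins
    "median":  np.median(probs, axis=0), # robust to one outlying member
}

for name, scores in rules.items():
    print(f"{name}: predicted class {int(np.argmax(scores))}")
```

For this sample all five rules agree on class 0, but on borderline samples the rules can disagree, which is why surveys report them as distinct combination strategies.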