“…Some of our considered features, such as DGDV, GoT, and cSGDV, are high-dimensional. So, as in our previous work [17,18], for each feature, we consider: (i) the full feature (i.e., no dimensionality reduction), (ii) linear dimensionality reduction via principal component analysis (PCA), keeping as few principal components as needed to account for at least 90% of the variance in the data corresponding to the given feature, and (iii)-(viii) nonlinear dimensionality reduction via t-distributed stochastic neighbor embedding (tSNE) under six different perplexity parameters (5, 13, 21, 30, 40, 50). This gives 1 + 1 + 6 = 8 dimensionality reduction choices in total.…”
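The eight choices above could be sketched as follows. This is a minimal illustration assuming scikit-learn (the excerpt does not name its tools); the feature matrix, its dimensionality, and the 2-D t-SNE output are hypothetical placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical high-dimensional feature matrix: 200 samples x 73 dimensions
X = rng.normal(size=(200, 73))

reduced = {"full": X}  # (i) the full feature, no dimensionality reduction

# (ii) PCA keeping as few components as needed to explain >= 90% of variance;
# a float n_components in (0, 1) tells scikit-learn to choose that count.
reduced["pca90"] = PCA(n_components=0.90).fit_transform(X)

# (iii)-(viii) t-SNE (here embedding to 2 dimensions, an assumption)
# under the six perplexity values listed in the text.
for perp in (5, 13, 21, 30, 40, 50):
    reduced[f"tsne_p{perp}"] = TSNE(perplexity=perp, random_state=0).fit_transform(X)

assert len(reduced) == 8  # 1 + 1 + 6 considered choices
```

Each entry of `reduced` would then feed the downstream analysis as one dimensionality reduction choice for the given feature.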