Over the past two decades, the Support Vector Machine (SVM) has been a popular supervised machine learning model, and many distinct algorithms have been designed separately, based on the different KKT conditions of the SVM model, for classification or regression with different convex or non-convex losses. In this paper, we propose an algorithm that can train different SVM models in a unified scheme. First, we introduce the definition of the LS-DC loss and show that the most commonly used losses in the SVM community either are LS-DC losses or can be approximated by LS-DC losses. Then, based on the DCA (difference-of-convex algorithm), we propose a unified algorithm, called UniSVM, that can solve the SVM model with any convex or non-convex LS-DC loss; per iteration, only a single vector, determined by the chosen loss, needs to be computed. In particular, for training robust SVM models with non-convex losses, UniSVM has a dominant advantage over all existing algorithms because it has a closed-form solution per iteration, whereas the existing ones must solve an L1- or L2-SVM subproblem per iteration. Furthermore, through a low-rank approximation of the kernel matrix, UniSVM can solve large-scale nonlinear problems efficiently. Finally, to verify the efficacy and feasibility of the proposed algorithm, we conduct experiments on large benchmark data sets, with and without outliers, for both classification and regression. UniSVM can be easily grasped by users or researchers, since its core code in Matlab is fewer than 10 lines.
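The closed-form per-iteration solve that the abstract highlights can be illustrated with a generic DCA sketch; this is not the paper's UniSVM implementation, just an assumed toy instance. Here a non-convex truncated squared loss min(r², c) is written as a least-squares term minus a convex term, so each DCA step linearizes the convex (subtracted) part and reduces to a ridge-regression solve in which only the right-hand-side vector changes between iterations. The function name, the regularization constant, and the specific loss are all illustrative choices, not taken from the paper.

```python
import numpy as np

def dca_truncated_ls(X, y, lam=1.0, c=1.0, iters=20):
    """DCA for min_w sum_i min(r_i^2, c) + lam*||w||^2, r = y - Xw.

    DC split: g(w) = ||y - Xw||^2 + lam*||w||^2 (convex),
              h(w) = sum_i max(r_i^2 - c, 0)    (convex, subtracted).
    Each DCA step solves (X^T X + lam*I) w = X^T (y - s*r),
    where s masks the residuals on which the concave part is active,
    so only the right-hand-side vector changes per iteration.
    """
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)   # factorizable once, reused every step
    w = np.linalg.solve(A, X.T @ y)  # plain ridge solution as the start
    for _ in range(iters):
        r = y - X @ w
        s = (r**2 > c).astype(float)        # 1 where the loss is truncated
        w = np.linalg.solve(A, X.T @ (y - s * r))  # closed-form DCA step
    return w
```

Because g is quadratic, each step is exact and DCA guarantees the objective never increases; the samples flagged by the mask `s` (the large-residual outliers) effectively stop influencing the fit, which is the mechanism behind the robustness claim.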