We propose Twin SVM, a binary SVM classifier that determines two nonparallel planes by solving two related SVM-type problems, each of which is smaller than that in a conventional SVM. The Twin SVM formulation is in the spirit of proximal SVMs via generalized eigenvalues. On several benchmark data sets, Twin SVM is not only fast but also shows good generalization. Twin SVM is also useful for automatically discovering two-dimensional projections of the data.
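The decision rule implied by the abstract — assign a point to the class whose plane lies nearer — can be sketched as follows. This is a minimal illustration, not code from the paper; the plane parameters `w1, b1, w2, b2` are assumed to have already been obtained by solving the two SVM-type problems.

```python
import numpy as np

def twinsvm_predict(X, w1, b1, w2, b2):
    """Assign each row of X to the class whose plane w.x + b = 0 is closer,
    using the perpendicular distance |w.x + b| / ||w||."""
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    # Ties go to class 0; labels here are illustrative (0/1).
    return np.where(d1 <= d2, 0, 1)
```

For example, with planes x = 0 and x = 4 in the plane, a point at x = 0.5 is assigned to the first class and a point at x = 3.9 to the second.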
In twin support vector machines (TWSVMs), we determine a pair of nonparallel planes by solving two related SVM-type problems, each of which is smaller than the one in a conventional SVM. However, as with other classification methods, the performance of the TWSVM classifier depends on the choice of the kernel. In this paper we treat the kernel selection problem for TWSVM as an optimization problem over convex combinations of finitely many basic kernels, and formulate it as an iterative alternating optimization problem. The efficacy of the proposed classification algorithm is demonstrated on several UCI machine learning benchmark datasets.
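The kernel being optimized over is a convex combination of basic kernels. A minimal sketch of that combination step (the alternating optimization itself is not shown, and the function name is illustrative, not from the paper):

```python
import numpy as np

def combined_kernel(Ks, mu):
    """Convex combination of basic kernel Gram matrices:
    K = sum_k mu_k * K_k, with mu_k >= 0 and sum_k mu_k = 1."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0), "mu must lie on the simplex"
    return sum(m * K for m, K in zip(mu, Ks))
```

In the alternating scheme described above, one would fix `mu` and train the TWSVM with this combined Gram matrix, then fix the classifier and update `mu`, repeating until convergence.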
Given a dataset in which each point is labeled with one of M labels, we propose a technique for multicategory proximal support vector classification via generalized eigenvalues (MGEPSVMs). Unlike Support Vector Machines, which classify points by assigning them to one of M disjoint half-spaces, here points are classified by assigning them to the closest of M nonparallel planes, each of which lies close to its respective class. When the data contains samples belonging to several classes, the classes often overlap, and classifiers that solve for several nonparallel planes may be better able to resolve such test samples. In multicategory classification tasks, a training point may have similarities with the prototypes of more than one class; this information can be used in a fuzzy setting. We propose a fuzzy multicategory classifier that utilizes information about the membership of training samples to improve the generalization ability of the classifier. The desired classifier is obtained by using one-from-rest (OFR) separation for each class, i.e., 1 : M − 1 classification. Experimental results demonstrate the efficacy of the proposed classifier over MGEPSVMs.
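The multicategory assignment described above — label a point with the class of the closest of M nonparallel planes — generalizes the two-plane rule directly. A minimal sketch under the assumption that the M plane parameters have already been computed:

```python
import numpy as np

def nearest_plane_predict(X, planes):
    """planes: list of (w, b) pairs, one plane w.x + b = 0 per class.
    Assign each row of X the index of the closest plane
    (perpendicular distance |w.x + b| / ||w||)."""
    dists = np.stack(
        [np.abs(X @ w + b) / np.linalg.norm(w) for w, b in planes],
        axis=1,
    )
    return np.argmin(dists, axis=1)
```

With three classes, for instance, a point near the plane of class 2 receives label 2 even if it lies moderately close to the other two planes.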