Abstract-The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and its compatibility with kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently compute the solution at all values of the regularization parameter by exploiting the fact that the SVM solution is piecewise linear in C. The SVMPath algorithm originally introduced by Hastie et al., while a significant theoretical contribution, does not work with semidefinite kernels. Ong et al. introduced the improved SVMPath (ISVMP) algorithm, which handles semidefinite kernels; however, it requires singular value decomposition or QR factorizations, as well as a linear programming solver to find the next value of C at each iteration. We introduce a simple implementation of the path-following algorithm that automatically handles semidefinite kernels without requiring singular-matrix detection, specialized factorizations, or an external solver. We provide theoretical results showing how this method resolves the issues associated with semidefinite kernels, and we discuss in detail the potential sources of degeneracy and cycling and how cycling is resolved. Moreover, we introduce an initialization method for unequal class sizes based upon artificial variables; it works within the existing path-following algorithm and does not require an external solver. Experiments compare performance with the ISVMP algorithm of Ong et al. and show that the proposed method is competitive in training time while maintaining high accuracy.
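The following minimal sketch (not the authors' algorithm) illustrates the property that path-following methods exploit: as C varies, the dual variables trace a piecewise-linear path, so an exact path method only needs to locate the breakpoints rather than re-solve from scratch. The grid of C values, the toy dataset, and the use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch (not the paper's path algorithm): trace SVM dual
# variables over a grid of C values to visualize the piecewise-linear
# solution path that exact path-following computes breakpoint by breakpoint.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=2, n_redundant=0,
                           random_state=0)

for C in np.logspace(-2, 2, 9):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    alpha = np.zeros(len(y))                      # full dual vector
    alpha[clf.support_] = np.abs(clf.dual_coef_).ravel()
    # Between breakpoints of the path, each alpha_i moves linearly in C;
    # a path algorithm finds the breakpoints exactly instead of gridding.
    print(f"C={C:8.3f}  #SV={len(clf.support_)}  sum(alpha)={alpha.sum():.3f}")
```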
Abstract-In this paper, we present the evolution of adaptive resonance theory (ART) neural network architectures (classifiers) using a multiobjective optimization approach. In particular, we propose the use of a multiobjective evolutionary approach to simultaneously evolve the weights and the topology of three well-known ART architectures: fuzzy ARTMAP (FAM), ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM). We refer to the resulting architectures as MO-GFAM, MO-GEAM, and MO-GGAM, and collectively as MO-GART. The major advantage of MO-GART is that it produces a number of solutions for the classification problem at hand that offer different trade-offs of merit: accuracy on unseen data (generalization) and size (number of categories created). MO-GART is shown to be more elegant (it does not require user intervention to define network parameters), more effective (better accuracy and smaller size), and more efficient (faster to produce the solution networks) than other ART neural network architectures that have appeared in the literature. Furthermore, MO-GART is shown to be competitive with other popular classifiers, such as classification and regression trees (CART) and support vector machines (SVMs).
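As a hedged illustration of the selection principle behind such a multiobjective search (the ART-specific genetic operators are omitted), the sketch below keeps only candidate networks that are non-dominated in the two objectives the abstract names: error on unseen data and size. The candidate scores are invented for the example.

```python
# Sketch: the Pareto filter at the heart of a multiobjective search like
# MO-GART. Candidate networks are scored by (error, size); we keep only
# non-dominated solutions, each a different accuracy/size trade-off.
from typing import List, Tuple

def pareto_front(scores: List[Tuple[float, int]]) -> List[int]:
    """Return indices of candidates not dominated in (error, size)."""
    front = []
    for i, (err_i, size_i) in enumerate(scores):
        dominated = any(
            (err_j <= err_i and size_j <= size_i) and
            (err_j < err_i or size_j < size_i)
            for j, (err_j, size_j) in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Example: (validation error, number of ART categories) per candidate.
scores = [(0.12, 40), (0.15, 22), (0.12, 35), (0.20, 10), (0.15, 30)]
print(pareto_front(scores))  # -> [1, 2, 3]
```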
Abstract-Existing active set methods reported in the literature for support vector machine (SVM) training must contend with singularities when solving for the search direction. When a singularity is encountered, an infinite descent direction can be carefully chosen to avoid cycling and allow the algorithm to converge; however, the resulting implementation is likely to be more complex and less computationally efficient than an algorithm that does not have to contend with singularities. We show that the revised simplex method introduced by Rusin provides a guarantee of nonsingularity when solving for the search direction. This guarantee allows a simpler and more computationally efficient implementation, as it avoids the need to test for rank degeneracies and to modify factorizations or solution methods based upon those degeneracies. In our approach, we take advantage of the guarantee of nonsingularity by implementing an efficient method for solving for the search direction, and we show that our algorithm is competitive with SVM-QP and particularly effective when the fraction of nonbound support vectors is large. In addition, we show competitive performance of the proposed algorithm against two popular SVM training algorithms, SVMLight and LIBSVM.
Index Terms-Active set method, null space method, quadratic programming, revised simplex method, support vector machine.
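As a rough sketch of the step at issue (not the paper's revised simplex method), the code below forms and solves the KKT system that a generic active set SVM solver uses to find a search direction over its working set; with a merely positive semidefinite kernel block this system can be singular, which is the complication the nonsingularity guarantee removes. The matrix and vectors are toy assumptions.

```python
# Sketch of the step an active-set SVM solver repeats: solve the KKT
# system for a search direction over the working set. With a merely
# positive semidefinite kernel block Q_ww this system can be singular,
# the failure mode the revised simplex guarantee is designed to avoid.
import numpy as np

def search_direction(Q_ww, y_w, g_w):
    """Direction for working-set variables under sum(y_i * d_i) = 0."""
    m = len(y_w)
    kkt = np.block([[Q_ww, y_w[:, None]],
                    [y_w[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([-g_w, [0.0]])
    d = np.linalg.solve(kkt, rhs)   # raises LinAlgError if singular
    return d[:m], d[m]              # direction and equality multiplier

# Toy example with a positive definite block (no singularity here).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
y = np.array([1.0, -1.0])
g = np.array([-1.0, -1.0])
print(search_direction(Q, y, g))
```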
Abstract-The support vector machine (SVM) is a widely employed machine learning model due to its repeatedly demonstrated superior generalization performance. The sequential minimal optimization (SMO) algorithm is one of the most popular SVM training approaches. SMO is fast and easy to implement; however, its working set is limited to two points. Faster training can result if the working set size is increased without significantly increasing the computational complexity. In this paper, we extend the two-point SMO formulation to a four-point formulation and address the theoretical issues associated with such an extension. We show that modifying the SMO algorithm to increase the working set size reduces the number of iterations required for convergence and shows promise for reducing the overall training time.
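For context, here is a hedged sketch of the classical two-point SMO update (Platt's analytic step) that the paper extends to four points: the subproblem over a pair of multipliers has a closed-form solution obtained by clipping an unconstrained step to the box constraints. Variable names and the degenerate-curvature shortcut are illustrative, not the paper's formulation.

```python
# Sketch of the classical 2-point SMO update: for a chosen pair (i, j),
# the subproblem is solved analytically by clipping an unconstrained
# Newton step so both multipliers stay in [0, C] while the equality
# constraint y_i*a_i + y_j*a_j = const is preserved.
import numpy as np

def smo_pair_step(alpha, i, j, K, y, E, C):
    """One analytic SMO update on the pair (i, j); returns new (a_i, a_j)."""
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the pair
    if eta <= 0:                              # semidefinite-kernel case
        return alpha[i], alpha[j]             # (real solvers handle this)
    a_j = alpha[j] + y[j] * (E[i] - E[j]) / eta
    # Box endpoints for a_j implied by the pairwise equality constraint.
    if y[i] == y[j]:
        lo, hi = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        lo, hi = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    a_j = np.clip(a_j, lo, hi)
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
    return a_i, a_j
```

In full SMO, E would hold the cached prediction errors f(x_k) - y_k, updated along with the bias after each accepted step; a 4-point extension replaces this pairwise closed form with a larger subproblem per iteration.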