2016
DOI: 10.1109/tnnls.2016.2574332

Budget Online Learning Algorithm for Least Squares SVM

Abstract: Batch-mode least squares support vector machine (LSSVM) is often associated with an unbounded number of support vectors (SVs), making it unsuitable for applications involving large-scale streaming data. Limited-scale LSSVM, which allows efficient updating, seems to be a good solution to this issue. In this paper, to train the limited-scale LSSVM dynamically, we present a budget online LSSVM (BOLSSVM) algorithm. Methodologically, by setting a fixed budget for SVs, we are able to update the LSSVM model acc…
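The abstract describes holding the SV set at a fixed budget while updating the model online. Below is a minimal illustrative sketch of that idea, not the paper's actual update rule: it re-solves the LSSVM dual system on the budgeted set after every example and evicts the oldest SV when the budget is exceeded (the RBF kernel, oldest-first eviction, and all parameter names are assumptions for illustration).

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    """Gaussian (RBF) kernel -- an illustrative choice."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

class BudgetOnlineLSSVM:
    """Keeps at most `budget` support vectors and re-solves the
    LSSVM dual system on that small set after every update."""

    def __init__(self, budget=50, C=10.0, gamma=0.5):
        self.budget, self.C, self.gamma = budget, C, gamma
        self.X, self.y = [], []          # the budgeted SV set
        self.alpha, self.b = None, 0.0

    def _solve(self):
        # LSSVM dual system: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
        n = len(self.X)
        K = np.array([[rbf(a, c, self.gamma) for c in self.X]
                      for a in self.X])
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.C
        sol = np.linalg.solve(A, np.concatenate(([0.0], self.y)))
        self.b, self.alpha = sol[0], sol[1:]

    def partial_fit(self, x, y):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(y))
        if len(self.X) > self.budget:    # enforce the fixed budget
            self.X.pop(0)                # oldest-first eviction (assumed)
            self.y.pop(0)
        self._solve()

    def predict(self, x):
        s = sum(a * rbf(sv, np.asarray(x, dtype=float), self.gamma)
                for a, sv in zip(self.alpha, self.X))
        return np.sign(s + self.b)
```

Because the linear system is only (B+1) x (B+1), each update costs at most O(B^3) regardless of how many examples the stream has produced; the paper's incremental update is presumably cheaper still.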

Cited by 22 publications (14 citation statements) · References 24 publications
“…the number of new samples, which poses both computational and memory issues especially for data streaming. Motivated by the online budgeted learning algorithms [15]–[20], we propose a budgeted online KRR (abbr. BOKRR) in which the number of input vectors in the active set is bounded by a predefined number, i.e., budget size B.…”
Section: B. Budgeted Online Kernel Ridge Regression (mentioning)
confidence: 99%
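For comparison with the LSSVM sketch above, here is a compact sketch of the budgeted online KRR idea this citation describes, assuming the same illustrative RBF kernel and an oldest-first eviction rule (the cited BOKRR's actual removal strategy and regularization may differ):

```python
import numpy as np

class BudgetedOnlineKRR:
    """Active set never grows beyond the budget B; after each update
    the standard KRR solution alpha = (K + lam*I)^{-1} y is recomputed
    on the bounded set."""

    def __init__(self, B=30, lam=1e-2, gamma=0.5):
        self.B, self.lam, self.gamma = B, lam, gamma
        self.X, self.y = [], []

    def _kernel(self, a, b):
        return np.exp(-self.gamma * np.sum((a - b) ** 2))

    def update(self, x, y):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(y))
        if len(self.X) > self.B:         # bound the active set by B
            self.X.pop(0)                # assumed oldest-first removal
            self.y.pop(0)
        K = np.array([[self._kernel(a, b) for b in self.X]
                      for a in self.X])
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.X)),
                                     np.array(self.y))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return sum(a * self._kernel(sv, x)
                   for a, sv in zip(self.alpha, self.X))
```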
“…The algorithm applying this strategy will be referred to as Sliding Windows Online KRR hereinafter. Reference [15] presents a strategy based on a maximum-similarity criterion: it removes the sample that is most similar to the current active set.…”
Section: Budget Maintenance (mentioning)
confidence: 99%
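A hedged sketch of the maximum-similarity removal criterion as described above: the sample whose total kernel similarity to the rest of the active set is largest is the one discarded (the RBF kernel and the sum-of-similarities reading of "most similar" are assumptions):

```python
import numpy as np

def most_similar_index(X, gamma=0.5):
    """Index of the active-set sample with the largest total kernel
    similarity to the others; evicting it discards the least novel
    point (RBF kernel and sum-of-similarities reading assumed)."""
    n = len(X)
    K = np.array([[np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                   for j in range(n)] for i in range(n)])
    totals = K.sum(axis=1) - 1.0   # drop self-similarity, K[i, i] = 1
    return int(np.argmax(totals))
```

An online learner enforcing a budget would call this once per overflow and pop the returned index, in contrast to the sliding-window strategy, which always pops the oldest sample.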
“…Instead of training the classifier again from scratch on the combined training set, the online learning algorithm incrementally updates the classifier to incorporate new examples into the learning model [21]. In this way, online learning can significantly reduce computation costs and is better suited to large-scale problems.…”
Section: Related Work (mentioning)
confidence: 99%
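As a toy illustration of the incremental-update idea (not from the cited paper), a perceptron-style online learner touches only the current example per step, never re-visiting past data:

```python
import numpy as np

def online_perceptron(stream, dim):
    """Each labeled example (x, y), y in {-1, +1}, adjusts the current
    weights in O(dim) time; no pass over previously seen data is made."""
    w = np.zeros(dim)
    for x, y in stream:
        if y * np.dot(w, x) <= 0:   # mistake-driven incremental update
            w += y * np.asarray(x, dtype=float)
    return w
```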
“…However, it can be approximated arbitrarily well by the optimal mapping in the class by varying the class parameter, and it can be determined only when the complete data stream is observed.² In addition to achieving [the optimal performance], we might well outperform [it], since our approach is data driven and based on a combination of partitions, i.e., we do not rely on a single fixed partition.³ The convergence rates given here sample our general regret results (after averaging over T) in the case of BT partitioning.…”
(mentioning)
confidence: 99%