2015
DOI: 10.1016/j.neunet.2015.03.013
Incremental learning for ν-Support Vector Regression

Cited by 404 publications (40 citation statements)
References 24 publications
“…An appropriate model can save time and achieve satisfactory accuracy. In this paper, we studied six kinds of machine-learning algorithms: Random Forest, Decision Tree, SVM (Support Vector Machine) [34], KNN (k-Nearest Neighbor) [16], Naive Bayes [20] and Discriminant Analysis. These algorithms are widely used in activity recognition and are generally efficient.…”
Section: Model Selection (mentioning)
confidence: 99%
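The six classifier families quoted above can be compared on toy data; the following is a minimal sketch using scikit-learn's standard implementations (the cited paper's dataset and settings differ, and the synthetic data here is only illustrative):

```python
# Hedged sketch: cross-validating the six classifier families named in the
# excerpt on synthetic data (assumed scikit-learn API, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an activity-recognition dataset
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Discriminant Analysis": LinearDiscriminantAnalysis(),
}
for name, clf in models.items():
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```

Cross-validated accuracy is only one axis of model selection; training time and memory footprint matter as well when the target is on-device activity recognition.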
“…To be more precise, ν is an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors. In addition, with probability 1, asymptotically, ν equals both fractions (Gu et al, 2015). Therefore, it is easier to tune parameter ν than ε-SVR.…”
Section: Equivalent Formulation of ν-SVR (mentioning)
confidence: 99%
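The quoted property (ν lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors) can be observed empirically. A minimal sketch using scikit-learn's NuSVR, a standard batch ν-SVR implementation (not the incremental algorithm of the cited paper), on a synthetic regression task:

```python
# Hedged sketch: nu controls the fraction of support vectors in nu-SVR.
# Uses scikit-learn's NuSVR on synthetic data; illustrative only.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-3, 3, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(200)

for nu in (0.2, 0.5, 0.8):
    model = NuSVR(nu=nu, C=1.0, kernel="rbf").fit(X, y)
    # Theory: fraction of support vectors >= nu
    frac_sv = len(model.support_) / len(X)
    print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f}")
```

Raising ν enlarges the support-vector set, which is exactly why ν is easier to interpret and tune than the insensitivity width ε of ε-SVR.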
“…Unfortunately, it is rather difficult to select an appropriate C. To address this issue, Schölkopf et al (2000) proposed the ν-SVR, which uses a new parameter ν to replace the parameter C. Moreover, it is easier to tune parameter ν than C. However, compared with the dual problem of ε-SVR, two complications are introduced in ν-SVR. The first is that the box constraints depend on C and the number of training samples, and the second is that one more inequality constraint is introduced (Gu et al, 2015). Moreover, as proved in Gu et al (2012), a feasible updating path is not always guaranteed to exist.…”
Section: Introduction (mentioning)
confidence: 99%
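For reference, the ν-SVR primal that gives rise to these complications can be written, following Schölkopf et al (2000), as (here ℓ is the number of training samples and φ the feature map):

```latex
\begin{aligned}
\min_{w,\,b,\,\varepsilon,\,\xi,\,\xi^*}\quad
  & \tfrac{1}{2}\lVert w\rVert^2
    + C\Big(\nu\varepsilon + \tfrac{1}{\ell}\sum_{i=1}^{\ell}(\xi_i + \xi_i^*)\Big) \\
\text{s.t.}\quad
  & \big(w^\top \phi(x_i) + b\big) - y_i \le \varepsilon + \xi_i, \\
  & y_i - \big(w^\top \phi(x_i) + b\big) \le \varepsilon + \xi_i^*, \\
  & \xi_i,\ \xi_i^* \ge 0, \qquad \varepsilon \ge 0.
\end{aligned}
```

In the corresponding dual, the box constraints take the form 0 ≤ α_i, α_i^* ≤ C/ℓ (hence depend on both C and ℓ), and the additional inequality Σ_i(α_i + α_i^*) ≤ Cν is the extra constraint the excerpt refers to.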
“…In order to effectively extract fault features of bearings from the vibration signal, scholars have proposed many effective methods, such as the short-time Fourier transform (STFT), wavelet transform (WT), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD), entropy, and so on [2][3][4][5]. Seker and Ayaz [6] proposed a new method to extract features from the measured vibration signals in motors subjected to accelerated bearing fluting aging and to detect the effects of bearing fluting at each aging cycle of induction motors.…”
Section: Introduction (mentioning)
confidence: 99%
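The first of the listed methods, the short-time Fourier transform, is straightforward to sketch. A minimal example with SciPy's `scipy.signal.stft` on a synthetic vibration-like signal (the 50 Hz component and sampling rate are illustrative assumptions, not values from the cited work):

```python
# Hedged sketch: time-frequency feature extraction via STFT (scipy.signal.stft)
# on a synthetic vibration-like signal; illustrative only.
import numpy as np
from scipy.signal import stft

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic signal: a 50 Hz tone (a stand-in for a bearing fault line) plus noise
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.RandomState(0).randn(t.size)

f, tt, Zxx = stft(x, fs=fs, nperseg=256)
# Average the spectrogram magnitude over time and locate the dominant bin
peak_hz = f[np.argmax(np.abs(Zxx).mean(axis=1))]
print(f"dominant frequency ~ {peak_hz:.1f} Hz")
```

In practice the fault signature is non-stationary, which is why the excerpt lists adaptive decompositions (EMD, EEMD, HHT) alongside the fixed-resolution STFT.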