2014
DOI: 10.1002/qre.1654

On Weighted Support Vector Regression

Abstract: We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing the influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF‐we…

Cited by 24 publications
(8 citation statements)
References 24 publications
“…At any rate, reducing the effect of noise in the training data by emphasizing the most reliable observations could enhance predictive ability. For instance, with weighted SVR it is possible to consider different contributions for each data point to the learning function by introducing a weighting vector in the risk function to be minimized (Han & Clemmensen, 2014), whereas for RF, tree‐level weights can be incorporated to strengthen the contribution of the more accurate trees in the final ensemble prediction (Winham et al. 2013).…”
Section: Results
confidence: 99%
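The weighting vector described in this statement scales each observation's loss term in the risk function. A minimal NumPy sketch of a weighted ε-insensitive SVR risk; the function name, `eps`, `C`, and the example weights are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_eps_insensitive_risk(y_true, y_pred, weights, eps=0.1, C=1.0):
    """Weighted empirical SVR risk: each observation's epsilon-insensitive
    loss is scaled by its weight w_i, so down-weighted points (e.g. suspected
    outliers) contribute less to the objective."""
    residuals = np.abs(y_true - y_pred)
    losses = np.maximum(0.0, residuals - eps)  # epsilon-insensitive loss
    return C * np.sum(weights * losses)

# Down-weighting a suspected outlier shrinks its influence on the risk:
y_true = np.array([1.0, 2.0, 10.0])  # third point is an outlier
y_pred = np.array([1.0, 2.0, 2.5])
uniform = weighted_eps_insensitive_risk(y_true, y_pred, np.ones(3))
downweighted = weighted_eps_insensitive_risk(y_true, y_pred, np.array([1.0, 1.0, 0.1]))
```

With uniform weights the outlier dominates the risk; shrinking its weight to 0.1 reduces its contribution tenfold, which is the mechanism the quoted passage refers to.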
“…To address this relative importance in the trajectory estimation, increased weight is given to the last samples. In the case of SVR, the sample weighting re-scales the C parameter, which means that the model puts more emphasis on getting these points right [37]. To compare the experimental results, a three-degree polynomial regression is also implemented.…”
Section: B. Results
confidence: 99%
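The recency weighting described here can be sketched with scikit-learn's SVR, whose `sample_weight` argument re-scales the C parameter per sample. The exponential decay rate of 0.9 and the synthetic trajectory are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D trajectory: position over time with noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 0.1 * rng.standard_normal(50)

# Exponentially larger weights for more recent samples
# (last sample gets weight 1.0), so the fit tracks the latest points.
n = len(t)
weights = 0.9 ** (n - 1 - np.arange(n))

model = SVR(kernel="rbf", C=10.0)
model.fit(t.reshape(-1, 1), y, sample_weight=weights)  # per-sample C re-scaling
pred = model.predict(np.array([[1.0]]))
```

Setting `sample_weight` this way makes errors on recent samples more expensive than errors on old ones, matching the "more emphasis on getting these points right" behavior the statement describes.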
“…In fact, some models, for instance support vector machine (SVM) models, have strong prediction capability partly because of the feature extraction procedures inherent in them. SVM increases the dimension of the feature space by mapping the original features into a new feature space (kernel-induced space), and regularizes a Lasso model in the kernel-induced space and finally decreases the effective dimension of the solution [3].…”
Section: Introduction
confidence: 99%
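The kernel-induced-space idea above can be illustrated with a minimal NumPy sketch: an RBF Gram matrix implicitly maps points into a high-dimensional space, and a regularized regression is solved there. Ridge regularization is used here instead of the Lasso mentioned in the quote, purely because it has a closed form; `gamma` and `lam` are assumed values:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2); each entry is an
    inner product in an implicit high-dimensional feature space."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

# Regularized regression in the kernel-induced space (kernel ridge sketch).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X).ravel()

lam = 1e-3                                      # ridge penalty (assumed)
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # regularized dual coefficients
y_fit = K @ alpha
```

The regularization term shrinks the dual coefficients `alpha`, which is the sense in which the effective dimension of the solution is reduced relative to an unpenalized fit in the kernel-induced space.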