2004
DOI: 10.1016/j.patrec.2003.12.012
Symbolization assisted SVM classifier for noisy data

Cited by 13 publications (8 citation statements)
References 17 publications
“…[36][37][38]. While some work exists on the influence of scaling and discretisation of continuous attributes [39][40][41], the effect of the coding of categorical attributes has, to the best of our knowledge, not been investigated.…”
Section: Support Vector Machines (mentioning)
confidence: 99%
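The statement above concerns how categorical attributes are coded before they reach an SVM. As a hedged illustration (not taken from the cited paper), the sketch below contrasts the two most common codings for a toy attribute; the data and names are assumptions for demonstration only.

```python
import numpy as np

# Toy categorical attribute; the values are illustrative assumptions.
categories = np.array(["red", "green", "blue", "green"])
labels, integer_codes = np.unique(categories, return_inverse=True)

# Integer coding imposes an artificial ordering (blue < green < red) that a
# distance-based SVM kernel will treat as meaningful structure.
print(integer_codes)                      # [2 1 0 1]

# One-hot coding keeps all categories equidistant in input space.
one_hot = np.eye(len(labels))[integer_codes]
print(one_hot)
```

Because kernels such as the RBF depend on distances between inputs, the two codings can yield different decision boundaries, which is precisely the unexamined effect the quoted authors point to.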
See 1 more Smart Citation
“…[36][37][38]. While some work on the influence of scaling and discretisation of continuous attributes [39][40][41] exists, the effect of coding of categorical attributes has to our best knowledge not been investigated.…”
Section: Support Vector Machinesmentioning
confidence: 99%
“…Obviously, the resulting dataset depends on the definition of the critical boundaries x c between two adjacent symbols. As an unfavourable choice of values may lead to a loss of meaningful information [40,41], the DPP choice of discretisation is not without risk. Popular variants of discretisation are analysed [18], confirming their relevance for classifier performance.…”
Section: Data Projection (mentioning)
confidence: 99%
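The criticality of the boundaries x_c can be made concrete with a short sketch. This is a minimal illustration of threshold-based symbolization, assuming arbitrary example values and boundaries that are not taken from the paper.

```python
import numpy as np

# Continuous attribute values and critical boundaries x_c between adjacent
# symbols; both are illustrative assumptions.
x = np.array([0.1, 0.45, 0.52, 0.9, 1.7])
boundaries = np.array([0.5, 1.0])

# np.digitize assigns each value the index of the interval it falls into:
# symbol 0 for x < 0.5, symbol 1 for 0.5 <= x < 1.0, symbol 2 for x >= 1.0.
symbols = np.digitize(x, boundaries)
print(symbols)   # [0 0 1 1 2]
```

Shifting a boundary by a small amount moves values such as 0.45 and 0.52 into the same symbol or apart, which is exactly the loss of meaningful information that an unfavourable choice can cause.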
“…Support vector machine (SVM) is a powerful state-of-the-art data mining tool that uses machine learning theory to maximize predictive accuracy while automatically avoiding overfitting the data. SVMs map the input data into a high-dimensional feature space and subsequently carry out linear regression in that feature space.…”
Section: Methods (mentioning)
confidence: 99%
“…SVMs map the input data into a high-dimensional feature space and subsequently carry out linear regression in that feature space. SVMs approximate the function in the following form:

f(x) = \sum_{i=1}^{n} \varphi(x_i)\, w + b

where n is the total number of input–output pairs, \varphi(x) is called the feature map, x is the input space, f(x) is the output, and w and b are the coefficients. w and b are estimated by minimizing

\min_{w, b, \xi, \xi^*} \; J(w, \xi, \xi^*, b) = \frac{1}{2} \lVert w \rVert^2 + C \sum_{i} (\xi_i + \xi_i^*)

subject to

y_i - \varphi^{T}(x_i)\, w - b \le \varepsilon + \xi_i
\varphi^{T}(x_i)\, w + b - y_i \le \varepsilon + \xi_i^*

where C is a regularization constant determining the trade-off between the training error and the model flatness, \varepsilon is a prescribed parameter of the \varepsilon-insensitive loss function, and \xi and \xi^* are positive slack variables for the data points.…”
Section: Methods (mentioning)
confidence: 99%
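The quoted primal problem is the standard ε-insensitive support vector regression. As a hedged sketch (not the cited authors' implementation), scikit-learn's SVR solves this formulation directly; the toy data and the values of C and epsilon below are assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVR

# Noisy toy regression data; shape and noise level are arbitrary choices.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)

# C trades training error against flatness; epsilon sets the insensitive tube.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Points on or outside the epsilon-tube acquire nonzero slack and become
# support vectors; points strictly inside the tube do not.
print(len(model.support_), "support vectors out of", len(X))
```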
“…Support vector machines (SVMs) [14][15][16][17][18][19][20][21][22][23][24][25] are powerful state-of-the-art data mining algorithms for nonlinear input–output knowledge discovery. The idea in SVMs is to map the input data into a high-dimensional feature space and subsequently carry out linear regression in that feature space.…”
Section: Support Vector Machine (mentioning)
confidence: 99%
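This mapping is typically performed implicitly through a kernel function. A minimal sketch, assuming an RBF kernel with an arbitrary gamma, shows how feature-space inner products are evaluated without ever constructing \varphi(x):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2) = <phi(x_i), phi(x_j)>,
    # the feature-space inner product, computed without forming phi.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0], [0.5], [3.0]])   # illustrative inputs
K = rbf_kernel(X)
print(np.round(K, 3))                 # nearby points give values near 1
```

Linear regression carried out on this Gram matrix is equivalent to linear regression in the implicit high-dimensional feature space.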