2013
DOI: 10.12732/ijpam.v87i6.2

SVM Trade-Off Between Maximize the Margin and Minimize the Variables Used for Regression

Abstract: Machine Learning is considered a subfield of Artificial Intelligence, concerned with the development of techniques and methods that enable computers to learn. In classification problems, generalization control is obtained by maximizing the margin, which corresponds to minimizing the weight vector. The minimization of the weight vector can also be used in regression problems, together with a loss function. The paper discusses the problem of classification for linearly separable data and introduces the concept of margin and t…
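The trade-off named in the title can be made concrete with the standard ε-insensitive formulation, min ½‖w‖² + C Σᵢ max(0, |yᵢ − ⟨w, xᵢ⟩ − b| − ε): a small C emphasizes minimizing the weight vector (a flat, strongly regularized regressor), while a large C emphasizes the loss term. A minimal sketch of that behavior, assuming scikit-learn's SVR rather than the paper's own implementation:

```python
# Minimal sketch (not the paper's code): how C trades off weight-vector
# minimization against the epsilon-insensitive training loss in SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 5.0, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, 40)

# Small C -> the 1/2 * ||w||^2 term dominates (flatter, smaller weights);
# large C -> the loss term dominates (tighter fit, larger weights).
for C in (0.01, 1.0, 100.0):
    model = SVR(kernel="linear", C=C, epsilon=0.1).fit(X, y)
    print(f"C={C:>6}: |w| = {abs(model.coef_[0][0]):.3f}, "
          f"support vectors = {len(model.support_)}")
```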

Cited by 9 publications (3 citation statements) | References: 8 publications | Citing publications: 2014–2024
“…SVM performs classification by choosing the decision boundary that maximizes the distance to the nearest data points of all classes (maximum margin classifier). The decision boundary produced by SVM is called the maximal margin classifier or the maximum-margin hyperplane [7]. Put simply, a neural network is built from 3 kinds of layers: the input layer receives the inputs, the output layer predicts the final output, and in between sits the hidden layer, which performs most of the computation the network requires [8].…”
Section: E. Random Forest
unclassified
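The maximal-margin boundary this statement describes is the hyperplane ⟨w, x⟩ + b = 0 whose geometric margin is 2/‖w‖. A minimal sketch of recovering those quantities, assuming scikit-learn's SVC on synthetic separable data (my own illustration, not from the citing paper):

```python
# Minimal sketch (assumed setup, not from the citing paper): a linear SVC
# on separable 2-D data; the fitted hyperplane <w, x> + b = 0 is the
# maximal-margin boundary, and its margin width is 2 / ||w||.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),   # class 0 cloud
               rng.normal(+2.0, 0.5, (20, 2))])  # class 1 cloud
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e3).fit(X, y)  # large C ~= hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print("normal vector w:", w, " bias b:", b)
print("margin width 2/||w||:", 2.0 / np.linalg.norm(w))
print("number of support vectors:", len(clf.support_vectors_))
```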
“…To illustrate this notion, consider this two-dimensional non-linearly separable case, in which only an ellipsoid can separate the data points in the original space (x1, x2) [7] (Figure 2: ellipsoid separating the data points). However, with some mapping of x1 and x2 into a 3D space using two-degree polynomials (Figure 3), it is possible to transform the problem from a non-linearly separable problem into a linearly separable problem in a higher-dimensional space. Technically, a kernel computes the dot product between the data points in the higher-dimensional space [8].…”
Section: Support Vector Non Optimal Hyperplanes
mentioning
confidence: 99%
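The identity this statement appeals to is easy to check numerically: for the degree-2 map φ(x1, x2) = (x1², √2·x1·x2, x2²), the dot product in the 3D feature space equals the polynomial kernel (x·z)² evaluated in the original 2D space. A minimal sketch (my own illustration, not the citing papers' code):

```python
# Minimal sketch (not from the citing papers): the explicit degree-2
# feature map phi sends 2-D points to 3-D, and the polynomial kernel
# k(x, z) = (x . z)^2 computes the same dot product without leaving 2-D.
import numpy as np

def phi(x):
    """Degree-2 polynomial map from R^2 to R^3."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

def poly_kernel(x, z):
    """Implicit dot product in the 3-D feature space."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(np.dot(phi(x), phi(z)))  # 1.0  (explicit feature-space dot product)
print(poly_kernel(x, z))       # 1.0  (same value via the kernel)
```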
“…To illustrate this notion, consider this two-dimensional non-linearly separable case in which only an ellipsoid (Figure 2) can separate the data points in the original space (x1, x2) [7] (Figure 2: ellipsoid separating the data points). However, with some mapping of x1 and x2 into a 3D space using two-degree polynomials (Figure 3), it is possible to transform the problem from a non-linearly separable problem into a linearly separable problem in a higher-dimensional space. Technically, a kernel computes the dot product between the data points in the higher-dimensional space.…”
Section: A. Support Vector Machines (SVM)
mentioning
confidence: 99%