2013
DOI: 10.2478/s11533-013-0342-5

Employing different loss functions for the classification of images via supervised learning

Abstract: Supervised learning methods are powerful techniques to learn a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem to it and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corres…
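For orientation, the regularization problem named in the abstract typically takes the following form. This is a standard textbook formulation and an assumption about the paper's notation (the loss v, parameter λ, and space H_K are not quoted from the paper itself):

```latex
% Generic Tikhonov regularization problem for kernel-based learning
% (standard formulation; the paper's exact notation may differ).
% Given training data (x_1, y_1), ..., (x_n, y_n) and a reproducing
% kernel Hilbert space H_K with kernel K, one minimizes
\[
  \min_{f \in \mathcal{H}_K} \;
  \frac{1}{n} \sum_{i=1}^{n} v\bigl(f(x_i), y_i\bigr)
  \;+\; \lambda \, \|f\|_{\mathcal{H}_K}^{2},
\]
% where v is the loss function and \lambda > 0 the regularization
% parameter. By duality (equivalently, the representer theorem), an
% optimal f admits the finite expansion
% f(\cdot) = \sum_{i=1}^{n} c_i \, K(\cdot, x_i),
% with coefficients c_i recoverable from dual optimal solutions,
% which is the representation result the abstract refers to.
```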

Cited by 5 publications (6 citation statements) · References 28 publications
“…4) Loss function: the value of this setting represents the objective of the model [29]. We used the value 'MultiClass', which indicates that the prediction results will contain more than two categories.…”
Section: B. Length of Stay Prediction Model (mentioning)
confidence: 99%
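As a concrete illustration of such a setting, a minimal sketch follows. It assumes a CatBoost-style gradient boosting classifier, since 'MultiClass' matches CatBoost's loss_function parameter; the citing paper's actual library, features, and data are not shown in the excerpt and are assumptions here:

```python
# Minimal sketch: configuring a multi-class loss for a prediction model.
# Assumes CatBoost, whose loss_function parameter accepts 'MultiClass';
# the citing paper's exact library and data are assumptions, not quoted.
from catboost import CatBoostClassifier

model = CatBoostClassifier(
    loss_function="MultiClass",  # target has more than two categories
    iterations=200,              # hypothetical training budget
    verbose=False,
)

# X_train / y_train are placeholders for the paper's length-of-stay data:
# model.fit(X_train, y_train)
# predictions = model.predict(X_test)
```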
“…For any i = 1, …, n the function g_i : ℝ^n → ℝ is convex and C-Lipschitz continuous, properties which allowed us to solve problem (23) with algorithm (A3). Choosing µ_k = 1/(ak) for some parameter a ∈ ℝ_{++}, and taking into account that L_k = ‖K‖ + ak‖K‖² for k ≥ 1, the iterative scheme (A3) with starting point x_0 = 0 ∈ ℝ^n becomes:

Initialization: t_1 = 1, y_1 = x_0 = 0 ∈ ℝ^n, a ∈ ℝ_{++}.
For k ≥ 1: µ_k = 1/(ak), L_k = ‖K‖ + ak‖K‖², …

[Table 4.2: Average classification errors in percentage.]

We chose C = 100 and kernel parameter σ = 0.5, which are the optimal values reported in [4] for this data set from a given pool of parameter combinations, tested different values for a ∈ ℝ_{++}, and performed for each of those choices a 10-fold cross-validation on D. We terminated the algorithm after a fixed number of 10000 iterations was reached, the average classification errors being presented in Table 4.2. For a = 1e-3 we obtained the lowest misclassification rate of 0.2278%.…”
Section: Support Vector Machines Classification (mentioning)
confidence: 99%
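The excerpt gives the parameter schedule but not the full update steps of algorithm (A3). The sketch below is therefore schematic: the schedule µ_k = 1/(ak), L_k = ‖K‖ + ak‖K‖² is taken from the quote, while the gradient of the smoothed objective and the FISTA-style extrapolation are assumptions standing in for the steps that the excerpt omits:

```python
import numpy as np

def variable_smoothing(grad_smoothed, K_norm, a=1e-3, n=100, iters=10000):
    """Schematic accelerated scheme using the quoted parameter schedule
    mu_k = 1/(a*k) and L_k = ||K|| + a*k*||K||^2.  The concrete updates
    of algorithm (A3) are not reproduced in the excerpt, so a generic
    FISTA-style gradient/extrapolation pair is used here as a stand-in."""
    x_prev = np.zeros(n)
    y = np.zeros(n)
    t = 1.0
    for k in range(1, iters + 1):
        mu_k = 1.0 / (a * k)                  # smoothing parameter
        L_k = K_norm + a * k * K_norm ** 2    # step-size constant
        # gradient step on the mu_k-smoothed objective (user-supplied)
        x = y - (1.0 / L_k) * grad_smoothed(y, mu_k)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)  # extrapolation
        x_prev, t = x, t_next
    return x_prev
```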
“…The second numerical experiment we consider for the variable smoothing algorithm concerns solving the problem of classifying images via support vector machines classification, an approach which belongs to the class of kernel-based learning methods. The given data set, consisting of 5268 images of size 200 × 50, was taken from a real-world problem a supplier of the automotive industry was faced with when establishing a computer-aided quality control for manufactured devices at the end of the manufacturing process (see [4] for more details on this data set). The overall task is to classify fine and defective components, which are labeled by +1 and −1, respectively.…”
Section: Support Vector Machines Classification (mentioning)
confidence: 99%
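The data set itself is proprietary, so the sketch below substitutes random placeholder arrays of the quoted shape (200 × 50 images, labels ±1) and mirrors the parameters quoted above (C = 100, Gaussian kernel with σ = 0.5). Mapping σ to scikit-learn's gamma as 1/(2σ²) is an assumption about the kernel convention, and the library choice is ours, not the citing paper's:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data standing in for the proprietary quality-control images:
# each 200 x 50 image flattened to a 10000-dimensional feature vector,
# labels +1 (fine) / -1 (defective).  The real data from [4] is not public.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200 * 50))
y = rng.choice([-1, 1], size=100)

sigma = 0.5  # Gaussian-kernel parameter quoted above
clf = SVC(C=100.0, kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```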
“…When the reader, after reading the whole work, watches these pictures presented dynamically like a film, they convey a kind of artistic mood, and it is through this mood that the reader feels what the work intends to express. In the following sections we will discuss the algorithm in detail [1][2][3][4].…”
Section: Introduction (mentioning)
confidence: 99%