“…For any i = 1, …, n the function g_i : R^n → R is convex and C-Lipschitz continuous, properties which allowed us to solve problem (23) with algorithm (A3). Choosing µ_k = 1/(ak) for some parameter a ∈ R_{++}, and taking into account that L_k = K + akK^2 for k ≥ 1, the iterative scheme (A3) with starting point x_0 = 0 ∈ R^n becomes:

Initialization: t_1 = 1, y_1 = x_0 = 0 ∈ R^n, a ∈ R_{++}.
For k ≥ 1: µ_k = 1/(ak), L_k = K + akK^2, …

Table 4.2: Average classification errors in percentage.

We chose C = 100 and kernel parameter σ = 0.5, which are the optimal values reported in [4] for this data set from a given pool of parameter combinations, tested different values of a ∈ R_{++}, and performed for each of these choices a 10-fold cross-validation on D. We terminated the algorithm after a fixed number of 10000 iterations was reached; the average classification errors are presented in Table 4.2. For a = 1e-3 we obtained the lowest misclassification rate of 0.2278 percent.…”
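The per-iteration parameter schedule above can be sketched as follows; this is only an illustration of the choices µ_k = 1/(ak) and L_k = K + akK^2, with K treated as a given constant (its definition is not contained in this excerpt) and the value K = 1.0 below purely hypothetical:

```python
def schedule(k, a, K):
    """Return (mu_k, L_k) for iteration k >= 1 and parameter a > 0.

    mu_k = 1/(a k) is the smoothing parameter, which decreases like 1/k;
    L_k = K + a k K^2 is the associated constant, which grows linearly in k.
    """
    mu_k = 1.0 / (a * k)
    L_k = K + a * k * K ** 2
    return mu_k, L_k

# Example with a = 1e-3 (the best-performing value reported in the text)
# and a hypothetical K = 1.0:
mu_1, L_1 = schedule(1, 1e-3, 1.0)   # mu_1 = 1000.0, L_1 = 1.001
```

The schedule couples the two quantities: as the smoothing parameter µ_k is driven to zero, the constant L_k (and hence the effective step-size restriction) grows, which is the usual trade-off in smoothing-based first-order schemes.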