2010
DOI: 10.1007/s10898-010-9558-0

DC models for spherical separation

Keywords: Spherical separation, DC functions, DCA

Cited by 37 publications (22 citation statements)
References 14 publications
“…The correctness of the following computational simulation can be estimated by the total percentage of well-classified points (of both sets A and B) when the algorithm stops. Table 1 reports the averages of the tenfold cross-validation results of the computational testing of the local search algorithm, where the notation is as follows: n is the space dimension; M and N stand for the numbers of points to be classified in the sets A and B, respectively; C stands for the value of the parameter C in the goal function of Problem (P₀); F₀ = F(x₀, r₀) is the starting value of the goal function of Problem (P₀); F(z) is the value of the function at the critical point (z = (x, r)) provided by SLSM; iter is the number of Linearized Problems solved (iterations of SLSM); time is the CPU time for computing solutions (seconds); % stands for the percentage of well-classified points; % [10] stands for the percentage of well-classified points reported in the paper [10].…”
Section: Testing the Local Search Methods (DCA)
confidence: 99%
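The goal function F(x, r) described in this excerpt can be illustrated with a minimal sketch. Its exact form in Problem (P₀) is not given here, so the code below assumes the common DC spherical-separation model F(x, r) = r² + C·(total classification error), in which a sphere of center x and radius r should enclose the set A and exclude the set B; the function names and the precise objective are assumptions, not the paper's definitions.

```python
import numpy as np

def goal_function(x, r, A, B, C):
    """Assumed goal function F(x, r): r^2 plus C times the total
    classification error (hypothetical form of Problem (P0))."""
    d2A = np.sum((A - x) ** 2, axis=1)           # squared distances of A-points from the center
    d2B = np.sum((B - x) ** 2, axis=1)           # squared distances of B-points from the center
    err_A = np.maximum(0.0, d2A - r ** 2).sum()  # A-points left outside the sphere
    err_B = np.maximum(0.0, r ** 2 - d2B).sum()  # B-points caught inside the sphere
    return r ** 2 + C * (err_A + err_B)

def pct_well_classified(x, r, A, B):
    """Total percentage of well-classified points of both sets A and B,
    i.e. the '%' statistic reported in the excerpt's Table 1."""
    ok_A = np.sum(np.linalg.norm(A - x, axis=1) <= r)  # A-points inside the sphere
    ok_B = np.sum(np.linalg.norm(B - x, axis=1) > r)   # B-points outside the sphere
    return 100.0 * (ok_A + ok_B) / (len(A) + len(B))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(0.0, 1.0, size=(50, 2))   # set to be enclosed
    B = rng.normal(4.0, 1.0, size=(60, 2))   # set to be excluded
    x, r, C = A.mean(axis=0), 2.0, 10.0      # a naive candidate sphere
    print(goal_function(x, r, A, B, C), pct_well_classified(x, r, A, B))
```

In a tenfold cross-validation run, these two quantities would be evaluated on each held-out fold and averaged, which is how the figures in Table 1 are described.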
“…Moreover, the field of data-set classification is expanding continually and reaching into new areas [4][5][6][7][8][9][10][11]. In addition, all separation problems turn out to be nonconvex, and therefore they require new mathematical apparatus to find a global solution, in particular, to escape the local minima produced by local search algorithms, including the classical ones (conjugate gradient methods, Newton-type methods, SQP, IPM, etc.).…”
Section: Introduction
confidence: 99%
“…Whereas nonlinear boundaries are obtained by using the kernel trick with a nonlinear kernel, an alternative approach is to construct classifiers whose classification regions are bounded by surfaces of predetermined shape, such as piecewise linear regions, piecewise conic regions, or spherical or ellipsoidal surfaces, e.g. [173,12,14,15,222,85,6,8]. The resulting optimization problems have been addressed using Nonsmooth and Global Optimization techniques [7,13].…”
Section: Related Distance-based Approaches
confidence: 99%
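As a concrete illustration of a "predetermined shape" decision region, the sketch below (not taken from any of the cited papers) tests membership in an ellipsoid; a sphere is the special case where the shape matrix is a scaled identity. The parameters c, Q, and r are placeholders that would come from solving the associated nonsmooth or global optimization problem.

```python
import numpy as np

def in_ellipsoid(y, c, Q):
    """True if y lies in the region {y : (y - c)^T Q (y - c) <= 1}."""
    d = y - c
    return float(d @ Q @ d) <= 1.0

def in_sphere(y, c, r):
    """A sphere of center c and radius r is the case Q = I / r^2."""
    return in_ellipsoid(y, c, np.eye(len(c)) / r ** 2)

print(in_sphere(np.array([1.0, 1.0]), np.zeros(2), 2.0))  # True: point lies within radius 2
```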
“…For instance, in many applications, such as classification problems [2,3], f is a polyhedral max-function corresponding to an easy optimization problem, that is, either a convex (linear) program [7,11,14] or a nonconvex one that can still be solved efficiently [12]. In this case, one can use sensitivity-analysis techniques on the problem to determine the minimum value λ̄ ≤ 1 such that the optimal solution of the f-problem at x (λ = 1) remains optimal for λ = λ̄ (and therefore for all values in between).…”
Section: Alternative Upper Models
confidence: 99%
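To make "polyhedral max-function corresponding to an easy optimization problem" concrete: a polyhedral max-function is a pointwise maximum of finitely many affine pieces, and minimizing it reduces to a linear program via the standard epigraph reformulation (a textbook identity, not a construction from the cited paper):

```latex
f(y) = \max_{1 \le i \le m} \bigl( \langle a_i, y \rangle + b_i \bigr),
\qquad
\min_y f(y)
\;\Longleftrightarrow\;
\min_{y,\, t} \bigl\{\, t \;:\; \langle a_i, y \rangle + b_i \le t,\; i = 1, \dots, m \,\bigr\}.
```

The sensitivity analysis mentioned in the excerpt then asks how far the parameter λ can be decreased from 1 before the optimal solution of this program changes, which is one way to obtain the threshold λ̄.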