2021
DOI: 10.1016/j.patcog.2020.107644

Faster SVM training via conjugate SMO

Cited by 22 publications
(8 citation statements)
References 12 publications
“…In the equation, p-best and g-best represent the local and global optimal positions of the particle swarm, respectively. When c₁ = 0, the particle has no ability to adjust according to its own optimal value; this variant is called the global PSO algorithm [10]. In this case, the particles have the ability to expand the search space.…”
Section: Classification Methods Of Network Ideological and Political Resources Based On Improved Svm Algorithm Optimization
confidence: 99%
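The velocity update this excerpt discusses can be sketched as follows. This is a minimal illustration of the standard PSO rule; the function name, default coefficient values (w, c1, c2), and the optional r1/r2 arguments are assumptions for illustration, not taken from the cited paper.

```python
import random

def pso_velocity(v, x, p_best, g_best, w=0.7, c1=1.5, c2=1.5, r1=None, r2=None):
    """Standard PSO velocity update: inertia + cognitive + social terms.

    With c1 = 0 the cognitive (p_best) term vanishes, so particles are
    driven only toward g_best -- the 'global PSO' variant in the excerpt.
    """
    r1 = random.random() if r1 is None else r1
    r2 = random.random() if r2 is None else r2
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)

# With c1 = 0 the update is independent of the particle's own best position:
v_a = pso_velocity(0.0, 1.0, p_best=5.0, g_best=2.0, c1=0.0, r1=0.5, r2=0.5)
v_b = pso_velocity(0.0, 1.0, p_best=99.0, g_best=2.0, c1=0.0, r1=0.5, r2=0.5)
```

The two calls return the same velocity because the p_best term is zeroed out, matching the excerpt's point that a particle with c₁ = 0 cannot adjust toward its own optimum.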
“…However, there are certain challenges of applying SVM over large datasets. Targeting these challenges, ample research has been conducted in recent years [17], [18], [19], [20], [21].…”
Section: Svm For Large Datasets
confidence: 99%
“…Applying the suggested methods, results are extracted from synthetic and real data, concluding that the use of such techniques reduces the computational cost necessary for the execution of the model. Finally, Torres et al. [41] improved a version of the SMO algorithm for training classification and regression SVMs, based on a Conjugate Descent procedure, decreasing the number of iterations needed for convergence. These cited methods rely on sophisticated concepts to improve computational performance, modifying some ideas of the basic theory of SVM.…”
Section: Svm Applied To Large Databases
confidence: 99%
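For context on the excerpt above, here is a minimal sketch of the classical two-variable SMO step that the cited conjugate variant accelerates. This simplified version minimizes a box-constrained quadratic and deliberately omits the label-sign structure of the full SVM dual equality constraint; it also assumes a feasible, non-converged iterate. All names are illustrative, and the conjugate-direction correction of the cited paper is not shown.

```python
import numpy as np

def smo_step(alpha, grad, Q, C):
    """One most-violating-pair update on the box-constrained dual
    0.5 * a^T Q a - 1^T a,  with 0 <= a <= C (equality constraint omitted).

    Plain SMO repeats such two-variable steps until convergence; the cited
    conjugate SMO augments each step with a conjugate-direction correction
    to reduce the number of iterations (that refinement is not shown here).
    """
    n = len(alpha)
    up = [k for k in range(n) if alpha[k] < C]    # coordinates that can increase
    low = [k for k in range(n) if alpha[k] > 0]   # coordinates that can decrease
    i = max(up, key=lambda k: -grad[k])           # most-negative gradient
    j = min(low, key=lambda k: -grad[k])          # most-positive gradient
    # Unconstrained minimizer along the direction e_i - e_j, then clip to the box.
    denom = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]
    t = (grad[j] - grad[i]) / max(denom, 1e-12)
    t = max(0.0, min(t, C - alpha[i], alpha[j]))
    alpha[i] += t
    alpha[j] -= t
    grad += t * (Q[:, i] - Q[:, j])               # keep the gradient up to date
    return alpha, grad

# Tiny illustration: Q = 2*I, starting at alpha = [0.5, 0], box C = 1.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
alpha, grad = np.array([0.5, 0.0]), np.array([0.0, -1.0])  # grad = Q @ alpha - 1
alpha, grad = smo_step(alpha, grad, Q, C=1.0)
```

Each step moves mass between exactly two coordinates, which is what makes SMO cheap per iteration; the conjugate variant's gain is in needing fewer such iterations, not in changing the per-step cost class.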
“…On the other hand, we observed that the method used to predict students' performance is based on a shallow architecture, and its predictive results fail to capture the relationships among attributes in a massive data set; a similar conclusion was already presented in [57,58] and other related works. It is also worth mentioning that the work developed is easily extendable to other contexts and methods, and there is also the possibility of parallelization, which guarantees an even greater gain in computational time, or even a combination with other methodologies applied to large databases [37,41].…”
Section: Final Considerations
confidence: 99%