2017
DOI: 10.1007/s00521-017-3237-8
Improvement on projection twin support vector machine

Cited by 8 publications (7 citation statements)
References 35 publications
“…The parameters of each algorithm are set as follows: For the BPMLL, the number of hidden neurons is set to 20% of the input dimension, and the number of training epochs is 100. For the Rank-SVM, the kernel function parameter and penalty parameter c are selected from {2^-6, ..., 2^0, ..., 2^6}. For the MLTSVM, the penalty parameters c_k and regularization parameter λ_k are selected from {2^-6, ..., 2^0, ..., 2^6}.…”
Section: Parameter Setting
confidence: 99%
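The {2^-6, ..., 2^6} grid described in this excerpt is a standard cross-validated parameter search. A minimal sketch of that protocol, assuming scikit-learn and an ordinary SVC as a stand-in for Rank-SVM/MLTSVM (the estimator, toy data, and scoring are illustrative assumptions, not the cited papers' actual code):

```python
# Hypothetical illustration of the {2^-6, ..., 2^6} parameter grid
# described above, using a plain SVC in place of Rank-SVM / MLTSVM.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Penalty parameter C and RBF kernel parameter gamma, each drawn
# from the exponential grid {2^-6, ..., 2^0, ..., 2^6}.
grid = {
    "C": [2.0 ** k for k in range(-6, 7)],
    "gamma": [2.0 ** k for k in range(-6, 7)],
}

search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```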
“…For the Rank-SVM, the kernel function parameter and penalty parameter c are selected from {2^-6, ..., 2^0, ..., 2^6}. For the MLTSVM, the penalty parameters c_k and regularization parameter λ_k are selected from {2^-6, ..., 2^0, ..., 2^6}. For the SS-MLLSTSVM, the penalty parameters c_k1 and regularization parameters c_k2, c_k3 are selected from {2^-6, ..., 2^0, ..., 2^6}.…”
Section: Parameter Setting
confidence: 99%
“…Support vector machine (SVM), proposed in 1995, is a machine learning algorithm based on the VC-dimension theory of statistical learning theory and the structural risk minimization principle [28]. In SVM, a classification hyperplane is constructed as the decision surface, which separates positive from negative samples and maximizes the margin between them [29].…”
Section: Introduction
confidence: 99%
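As a concrete illustration of the decision surface this excerpt describes, here is a minimal sketch of a maximum-margin linear SVM (the toy two-Gaussian data and the linear kernel are assumptions for the example, not taken from the cited paper):

```python
# Minimal illustration of the SVM decision surface: a hyperplane
# w.x + b = 0 separating positive from negative samples with maximum
# margin. Toy data only; classification is by sign(w.x + b).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
pos = rng.normal(+2.0, 1.0, (50, 2))   # positive class samples
neg = rng.normal(-2.0, 1.0, (50, 2))   # negative class samples
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [-1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# The geometric margin maximized by SVM is 2 / ||w||.
print("hyperplane normal:", w, "bias:", b)
print("margin width:", 2.0 / np.linalg.norm(w))
```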
“…Since then, various improved algorithms based on PTSVM have been proposed [24]-[34], e.g., RPTSVM [24], LSPTSVM [25], [26], IPTSVM [27], LIWLSPTSVM [28], PNPSVM [29], NPTSVM [30], PTSVR [31], and other PTSVM variants [32]-[34]. LSTSVM replaces the hinge loss function in TWSVM with a squared loss function and achieves very fast training, since the two QPPs are replaced by two systems of linear equations; however, this may reduce classification ability and weaken the characteristic of constructing two nonparallel hyperplanes [35].…”
Section: Introduction
confidence: 99%
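The LSTSVM shortcut this excerpt refers to can be made concrete: with the squared loss, each nonparallel plane has a closed-form solution, so training reduces to two linear solves instead of two QPPs. A minimal numpy sketch, assuming the commonly cited LSTSVM formulation of Kumar and Gopal (the matrix names E, F and the toy data are illustrative):

```python
# Sketch of the LSTSVM idea: squared loss turns each twin-SVM QPP into
# a regularized linear system with a closed-form solution, commonly
# stated as [w1; b1] = -(F'F + (1/c1) E'E)^{-1} F' e2, and symmetrically
# for the second plane.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(+2.0, 1.0, (40, 2))  # class +1 samples
B = rng.normal(-2.0, 1.0, (40, 2))  # class -1 samples
c1 = c2 = 1.0

E = np.hstack([A, np.ones((len(A), 1))])  # augmented matrix [A e]
F = np.hstack([B, np.ones((len(B), 1))])  # augmented matrix [B e]
e1, e2 = np.ones(len(A)), np.ones(len(B))

# Plane 1 (close to class +1, far from class -1): one linear solve.
z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * E.T @ E, F.T @ e2)
# Plane 2 (close to class -1, far from class +1): the symmetric solve.
z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * F.T @ F, E.T @ e1)

w1, b1 = z1[:2], z1[2]
w2, b2 = z2[:2], z2[2]

# Classify a point by its distance to the nearer of the two planes.
def predict(x):
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 1 if d1 < d2 else -1

print(predict(np.array([2.0, 2.0])), predict(np.array([-2.0, -2.0])))
```

This is exactly the trade-off the excerpt notes: the linear solves are much cheaper than QPPs, but the squared loss penalizes all points, not just margin violators, which can blunt the classifier.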