2016
DOI: 10.1007/s10044-016-0548-9

Double-fold localized multiple matrix learning machine with Universum

Cited by 9 publications (2 citation statements). References 32 publications.
“…Moreover, for each data set, 70% of the samples are chosen at random as training samples and the remainder are used for testing. To obtain reliable experimental results, we adopt a 10-fold cross-validation strategy [10]. In addition, the one-against-one classification strategy is used for multi-class problems [30–33].…”
Section: Methods
confidence: 99%
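The quoted protocol combines three standard steps: a random 70/30 train/test split, 10-fold cross-validation, and one-against-one decomposition for multi-class problems. A minimal sketch of those three steps follows; the function names are illustrative, not taken from the cited papers:

```python
import random
from itertools import combinations

def split_train_test(samples, train_frac=0.7, seed=0):
    """Randomly pick 70% of the samples for training; the rest are for testing."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(train_frac * len(samples))
    return idx[:cut], idx[cut:]

def kfold_indices(n, k=10):
    """Yield (train, validation) index lists for k-fold cross-validation."""
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

def one_vs_one_pairs(classes):
    """One-against-one: one binary classifier per unordered pair of classes."""
    return list(combinations(sorted(classes), 2))
```

For a 3-class problem, `one_vs_one_pairs({0, 1, 2})` yields the three binary subproblems `(0, 1)`, `(0, 2)`, `(1, 2)`; each would be handled by its own binary classifier, with the final label decided by majority vote.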
“…For nonlinearly separable problems, which are ubiquitous, nonlinear classifiers including NCC [2], FC-NTD [3], KMHKS [4], and KSVM [5] are more suitable. One family of nonlinear classifiers is the kernel-based methods, including MultiV-KMHKS [6], MVMHKS [7], RMVMHKS [8], DLMMLM [9], UDLMMLM [10], etc. [11–13]; these first adopt kernel functions to generate kernel matrices and then obtain the optimal classifier parameters by solving over these matrices. For convenience, we summarize the full names and abbreviations of some terms in Table 1.…”
Section: Introduction
confidence: 99%
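The kernel-based methods named in the quote all share the first step it describes: applying a kernel function to the training samples to build a Gram (kernel) matrix, over which the classifier parameters are then optimized. A minimal sketch of that step, assuming the common Gaussian (RBF) kernel (the specific kernels used by each cited method may differ):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel value between two feature vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def kernel_matrix(X, kernel=rbf_kernel):
    """Build the n x n Gram matrix K with K[i][j] = kernel(X[i], X[j])."""
    n = len(X)
    return [[kernel(X[i], X[j]) for j in range(n)] for i in range(n)]
```

The resulting matrix is symmetric with ones on the diagonal (each sample has unit similarity to itself); it is this matrix, rather than the raw features, that kernel classifiers solve over.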