2020
DOI: 10.1007/s10044-020-00891-8
Deep kernel learning in extreme learning machines

Cited by 21 publications (13 citation statements)
References 23 publications
“…The kernel function can overcome the curse of dimensionality: samples that are linearly inseparable in the original space can be non-linearly mapped to a higher-dimensional space in which they become linearly separable, improving classification accuracy. Building on this property, Huang [36] introduced the kernel function into ELM and proposed the KELM algorithm to further enhance generalization ability and stability. In this paper, the kernel function and DELM are combined to construct a Deep Kernel Extreme Learning Machine (DKELM).…”
Section: Proposed CS-DKELM Method
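The KELM idea quoted above can be illustrated in a few lines. The sketch below is not the cited paper's implementation; it assumes an RBF kernel and the standard closed-form KELM output weights β = (I/C + Ω)⁻¹T, where Ω is the training kernel matrix and C a regularization constant. The XOR data shows a linearly inseparable problem that the kernel mapping handles:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=10.0, gamma=1.0):
    # Closed-form KELM output weights: beta = (I/C + Omega)^-1 T
    Omega = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + Omega, T)

def kelm_predict(Xnew, X, beta, gamma=1.0):
    # f(x) = [k(x, x_1), ..., k(x, x_N)] beta
    return rbf_kernel(Xnew, X, gamma) @ beta

# XOR: linearly inseparable in the original 2-D space.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([-1.0, 1.0, 1.0, -1.0])
beta = kelm_fit(X, T, C=100.0, gamma=2.0)
pred = np.sign(kelm_predict(X, X, beta, gamma=2.0))
```

With a weak regularizer (large C) the kernel matrix is well-conditioned here, and the predicted signs recover the XOR labels exactly.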
“…k is the convolution kernel size, s is the input data size, and d is the convolution result size. The Rectified Linear Unit (ReLU) [21] is used as the activation function to introduce non-linearity into the model, as shown in Eq. ( 5):…”
Section: Feature Extraction Using ZFNet
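One concrete reading of this size relation is the standard convolution output-size formula d = ⌊(s − k + 2p)/stride⌋ + 1; the stride and padding values below are illustrative assumptions, not taken from the excerpt:

```python
import numpy as np

def conv_output_size(s, k, stride=1, padding=0):
    # Standard convolution size formula (assumed, not from the excerpt):
    # d = floor((s - k + 2*padding) / stride) + 1
    return (s - k + 2 * padding) // stride + 1

def relu(x):
    # Rectified Linear Unit: ReLU(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

# Example: a 224x224 input with a 7x7 kernel at stride 2, no padding.
d = conv_output_size(224, 7, stride=2)   # 109
activated = relu(np.array([-3.0, 0.0, 2.5]))
```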
“…Therefore, the details of recent advances in supervised learning methods will be reviewed. LR and SVM are often used for ETD [21]. These strategies work better when the dataset is small.…”
Section: Introduction
“…Wang et al. [67] present a unifying framework for deep learning and multiple kernel learning, a work of theoretical interest to those who wish to use both schemes. Afzal et al. [1] also explore the use of arc-cosine kernel layers and fast methods for producing parameter matrices. Liu et al. [40] presented a linearized kernel sparse representation classifier applied to subcortical brain segmentation, achieving relevant image segmentation results.…”
Section: Related Work
“…The algorithm's core idea is to map the original space X into a kernelized space, induced by the set of references and the kernel function, in which the mapped dataset X becomes a relatively simple instance for the internal classifier. 1 If memory is scarce, it is also possible to compute the elements of X online; however, this decision trades speed for memory. Moreover, some of the steps can be specialized to avoid computation; for instance, the computation of {σᵢ} can be skipped for FFT and K-Means, since it is a side product of these algorithms.…”
Section: Learning and Prediction Procedures
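The reference-induced mapping described above can be sketched as follows. The helper name `kernel_map`, the Gaussian kernel, and the way references are picked are all illustrative assumptions; the only idea taken from the excerpt is that each sample is represented by its kernel values against a fixed set of references:

```python
import numpy as np

def kernel_map(X, refs, sigma=1.0):
    # Map each sample to its kernel similarities against the references:
    # phi(x) = (k(x, r_1), ..., k(x, r_m)), here with a Gaussian kernel.
    d2 = ((X[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
refs = X[:8]                       # stand-in for K-Means / FFT-chosen references
X_hat = kernel_map(X, refs, sigma=1.0)
# X_hat has shape (100, 8): one similarity column per reference.
```

The mapped dataset can then be fed to any internal classifier; computing rows of `X_hat` on demand instead of materializing it is the speed-for-memory trade mentioned in the excerpt.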