2016
DOI: 10.1016/j.neucom.2016.08.011
A double-layer ELM with added feature selection ability using a sparse Bayesian approach

Cited by 6 publications (2 citation statements)
References 23 publications
“…Sparse learning has been shown to be efficient at pruning the irrelevant parameters in many practical applications, by incorporating sparsity-promoting penalty functions into the original problem, where the added sparsity-promoting terms penalize the number of parameters (Kiaee et al., 2016a,b,c). Motivated by learning efficient architectures of a deep CNN for embedded implementations, our work focuses on the design of a sparse network using an initial pre-trained dense CNN.…”
Section: Introduction (mentioning, confidence: 99%)
“…Assis Boldt et al. [36] adopted three cascade combinations and an extreme learning machine (ELM) for fault diagnosis. Kiaee et al. [37] proposed a sparse Bayesian double-layer ELM to deal with high-dimensional data.…”
Section: Introduction (mentioning, confidence: 99%)