2010
DOI: 10.1016/j.engappai.2010.06.009
Performance enhancement of extreme learning machine for multi-category sparse data classification problems

Cited by 133 publications
(65 citation statements)
References 8 publications
“…The ELM is a single-hidden-layer feed-forward ANN model in which the input weights are assigned randomly, while the output weights are calculated analytically. Nondifferentiable or discrete activation functions can also be used in the hidden layer of the ELM, in addition to activation functions such as sigmoidal, sine, Gaussian, and hard-limit [23]. Conventional feed-forward ANNs depend on certain parameters such as momentum or learning rate.…”
Section: Extreme Learning Machine
confidence: 99%
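The two-step training scheme described in the excerpt (random input weights, analytically computed output weights) can be sketched as follows. This is a minimal illustration of the general ELM procedure, not the cited paper's implementation; the function names, sigmoid activation, and random initialization are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    """Basic ELM training: input weights are random, output weights analytic."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and biases are drawn randomly and never updated.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights solved analytically via the Moore-Penrose pseudoinverse,
    # i.e. the minimum-norm least-squares solution of H @ beta = Y.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because no iterative weight update is involved, training reduces to one matrix product and one pseudoinverse, which is where ELM's speed advantage over gradient-trained networks comes from.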
“…Changing the momentum value may prevent the error from becoming trapped at a local point, but it has no influence on the long learning process. In ELM, the input weights and threshold values are generated randomly, while the output weights are obtained analytically [23]. The ELM network is a specialized form of the single-hidden-layer feed-forward ANN model.…”
Section: Extreme Learning Machine
confidence: 99%
“…For the input data set X = {x_k}, let the desired network output be Y = {y_j} and the actual network output be O = {o_k}, where k ∈ [1, M] indexes the consecutive input/output vectors. The network with M neurons in the hidden layer, shown in Figure 4, can be expressed mathematically as [18]:…”
Section: Extreme Learning Machine
confidence: 99%
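The equation itself is cut off in the excerpt above. In the standard ELM formulation that the excerpt's notation points to (using its symbols; the weight and bias names below are the conventional ones, not necessarily those of the cited paper), the network output for sample k is a weighted sum of the hidden-neuron activations:

```latex
o_k = \sum_{i=1}^{M} \beta_i \, g\!\left(\mathbf{w}_i \cdot \mathbf{x}_k + b_i\right),
```

where g is the hidden-layer activation function, \mathbf{w}_i and b_i are the randomly assigned input weights and bias of hidden neuron i, and \beta_i is the analytically computed output weight.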
“…Activation functions such as sigmoid, sine, Gaussian, and hard limit are used in the hidden layer, while a linear activation function is used in the output layer. Nondifferentiable and discrete activation functions can also be used in the ELM [18].…”
Section: Extreme Learning Machine
confidence: 99%
“…Yang et al. [7] proposed an evolutionary ELM based on differential evolution to balance exploration and exploitation and to reduce the prediction time of the original ELM. Suresh et al. [8] presented a real-coded genetic algorithm to determine the input weights, the optimal number of hidden neurons, and the bias values of the ELM. Their model yields a compact network with better generalization, but at a higher computational cost.…”
Section: Introduction
confidence: 99%