2019
DOI: 10.1016/j.neucom.2019.03.086
Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trend

Abstract: Despite the prominence of the extreme learning machine (ELM) model and its excellent features, such as minimal intervention for learning and model tuning, simplicity of implementation, and high learning speed, which make it an attractive alternative method for Artificial Intelligence, including Big Data Analytics, it is still limited in certain aspects. These aspects must be addressed to achieve an effective and cost-sensitive model. This review discusses the major drawbacks of ELM, which include dif…

Cited by 51 publications (24 citation statements) · References 152 publications
“…This work focuses on the use of extreme learning machines (ELM) for computational partial differential equations (PDE). ELM was originally developed in [26,27] for linear classification/regression problems with single hidden-layer feed-forward neural networks, and has since found wide applications in a number of fields; see the reviews in [25,1] and the references therein. Two strategies underlie ELM: (i) random but fixed (non-trainable) hidden-layer coefficients, and (ii) trainable linear output-layer coefficients determined by a linear least squares method or by using the pseudo-inverse of the coefficient matrix (for linear problems).…”
(mentioning, confidence: 99%)
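The two strategies described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the reviewed paper's implementation), assuming tanh hidden units and a least-squares solve for the output weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Train a single-hidden-layer ELM.

    Strategy (i): input weights W and biases b are random and fixed.
    Strategy (ii): output weights beta are found by linear least squares.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, non-trainable
    b = rng.normal(size=n_hidden)                 # random, non-trainable
    H = np.tanh(X @ W + b)                        # hidden-layer feature matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: approximate sin(x) on [0, pi]
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
y_hat = elm_predict(X, W, b, beta)
```

Because only `beta` is solved for, training reduces to one linear least-squares problem, which is the source of ELM's speed advantage noted in the excerpts.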
“…The main feature is that the input weights connecting the input layer to the hidden layer, and the hidden-layer biases, can be assigned randomly and need no adjustment; the learning process only needs to compute the output weights [47][48][49]. Therefore, ELM learns faster than traditional artificial neural networks while maintaining learning accuracy [50]. The structure of ELM is presented in Figure 4.…”
Section: Extreme Learning Machine (mentioning, confidence: 99%)
“…Distributed smart space orchestration system (Ds2os) was the second IoT-related benchmark and included a collection of traces captured in a networking domain for IoT. The data had been collected from the application layer; hence, they differed significantly from the conventional feature-based patterns used by network-traffic classifiers. The main dataset included various sources, such as light controllers, thermometers, person detection sensors, washing machines, batteries, thermostats, smart doors, and smart phones.…”
Section: Internet Of Things Benchmarks (mentioning, confidence: 99%)
“…First, the proposed designs mostly rely on reconfigurable platforms such as field programmable gate arrays (FPGAs) [9,10,37,50], which may prove quite expensive. By contrast, implementations on micro-controllers or microcomputers have drawn limited attention, in spite of the fact that these devices best fit IoT applications and remarkably shrink the time-to-market of commercial products [1,17].…”
Section: Introduction (mentioning, confidence: 99%)