2016
DOI: 10.1016/j.asoc.2016.01.028

A self-organizing cascade neural network with random weights for nonlinear system modeling

Abstract: In this paper, a self-organizing cascade neural network (SCNN) with random weights is proposed for nonlinear system modeling. This SCNN is constructed via simultaneous structure and parameter learning processes. In structure learning, the units, which lead to the maximal error reduction of the network, are selected from the candidates and added to the existing network one by one. A stopping criterion based on the training and validation errors is introduced to select the optimal network size to match with a gi…
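The abstract describes a greedy, constructive scheme: candidate units with random weights are evaluated, the one giving the largest error reduction is added, and growth stops based on training and validation errors. The Python sketch below only illustrates that general incremental idea under my own simplifying assumptions (random input weights, least-squares output weights, tanh units, a simple patience-based stopping rule); it is not the authors' SCNN algorithm, whose candidate generation, cascade connectivity, and exact stopping criterion are only partially visible in the truncated abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstsq_fit(H, y):
    """Least-squares output weights for hidden-output matrix H."""
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def grow_cascade(X_tr, y_tr, X_va, y_va, max_units=30, n_candidates=20, patience=3):
    """Greedy cascade growth: at each step, try several candidate units with
    random weights, keep the one whose addition gives the lowest training
    error, and stop when validation error stops improving. X_tr, X_va are 2-D."""
    H_tr = np.ones((len(X_tr), 1))            # bias column
    H_va = np.ones((len(X_va), 1))
    units = []                                # (w_in, b) per accepted hidden unit
    best_va, stall = np.inf, 0

    for _ in range(max_units):
        # Cascade input: original features plus outputs of all existing units.
        Z_tr = np.hstack([X_tr, H_tr[:, 1:]])
        Z_va = np.hstack([X_va, H_va[:, 1:]])

        best = None
        for _ in range(n_candidates):
            w_in = rng.normal(size=Z_tr.shape[1])    # random input weights
            b = rng.normal()
            h_tr = np.tanh(Z_tr @ w_in + b)
            H_try = np.hstack([H_tr, h_tr[:, None]])
            w_out = lstsq_fit(H_try, y_tr)
            err = np.mean((H_try @ w_out - y_tr) ** 2)
            if best is None or err < best[0]:
                best = (err, w_in, b, h_tr)

        _, w_in, b, h_tr = best
        units.append((w_in, b))
        H_tr = np.hstack([H_tr, h_tr[:, None]])
        H_va = np.hstack([H_va, np.tanh(Z_va @ w_in + b)[:, None]])

        w_out = lstsq_fit(H_tr, y_tr)
        va_err = np.mean((H_va @ w_out - y_va) ** 2)
        if va_err < best_va:
            best_va, stall = va_err, 0
        else:
            stall += 1
            if stall >= patience:                    # simple early-stopping rule
                break

    return units, lstsq_fit(H_tr, y_tr)
```

Each added unit sees both the original inputs and the outputs of all previously accepted units, which is the cascade aspect the title refers to.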

Cited by 60 publications (21 citation statements)
References 40 publications (70 reference statements)
“…Neural networks can be used to simulate linear and non-linear processes, optimisation, classification and control [5,6,14]. For the sake of modelling the processes taking place in the environment, the network termed the multilayer perceptron (MLP) is used most frequently.…”
Section: Methods
confidence: 99%
“…Therefore, to be able to predict metastable states in good advance, and to make it possible for the treatment plant staff to select optimum settings of the bioreactor parameters to ensure effective wastewater treatment, it is necessary to develop mathematical models that are capable of determining continuous value or the linguistic value. In the former case, the discrete values of the measured dependent variable are predicted [4][5][6]. In the latter case, the range of variation of the measurement results provides a basis for a division into classes.…”
Section: Introduction
confidence: 99%
“…MAPE is the average of absolute errors divided by actual observation values. MSE is probably the most commonly used error metric [44,45]; it penalizes larger errors, because squaring larger numbers has a greater impact than squaring smaller numbers. MAD is the sum of absolute differences between the actual value and the forecast divided by the number of observations [46][47][48].…”
Section: Validating the Artificial Neural Network
confidence: 99%
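The excerpt above compares MAPE, MSE, and MAD as validation metrics. For reference, a minimal NumPy implementation of the three definitions as stated in the excerpt is sketched below; the function names are mine, and MAD is computed here as the mean absolute difference between actual and forecast values, as described.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error: average of |error| / |actual|, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def mse(actual, forecast):
    """Mean squared error: squaring penalizes large errors more heavily."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) ** 2)

def mad(actual, forecast):
    """Mean absolute deviation: sum of |actual - forecast| divided by n."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

# Example: actual = [100, 120, 130], forecast = [110, 115, 128]
# mape -> ~5.24 %, mse -> 43.0, mad -> ~5.67
```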
“…In the process of searching optimal parameters, the vector described as Equation (4), is represented by a particle. The objective of training FNN is to improve the prediction performance of FNN model and the fitness function has been formulated as Equation (5). The process of optimizing the FNN parameters by IPSO can be described as follows.…”
Section: Optimizing the Parameters of FNN with IPSO
confidence: 99%
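The excerpt refers to Equations (4) and (5), which are not reproduced here, so the exact particle encoding and fitness function of the cited IPSO are unknown. The sketch below shows only a plain (non-improved) particle-swarm loop that tunes a generic parameter vector against a user-supplied fitness function; the inertia and acceleration constants, the bounds, and the hypothetical fnn_predict in the usage comment are assumptions for illustration, not the cited IPSO variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_optimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Plain PSO: each particle is one candidate FNN parameter vector; the
    swarm moves toward the personal and global best positions found so far."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions = parameter vectors
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()             # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Hypothetical usage: minimize the MSE of an FNN whose parameters are packed
# into a single vector `theta` (fnn_predict is a placeholder, not a real API):
# def fitness(theta): return np.mean((fnn_predict(theta, X_train) - y_train) ** 2)
# best_theta, best_mse = pso_optimize(fitness, dim=n_params)
```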
“…However, the relationship is highly nonlinear in nature so that it is hard to develop a comprehensive mathematic model. To deal with this kind of problem, the commonly used methods are fuzzy theory and neural networks [4][5][6]. A fuzzy neural network (FNN) is the combination of fuzzy logic and a neural network, and possesses the advantages of processing vague information and good learning abilities.…”
Section: Introduction
confidence: 99%