2016
DOI: 10.2495/sdp-v11-n4-566-577

Clustering approach applied on an artificial neural network model to predict PM10 in mega cities of Mexico

Abstract: A cluster-based artificial neural network model called CLASO (Classification-Assemblage-Association) has been proposed to predict the maximum of the 24-h moving average of PM10 concentration on the next day in the three largest metropolitan areas of Mexico. The model is a self-organised, real-time learning neural network, which builds its topology via a process of pattern classification using a historical database. This process is based on a supervised clustering technique, assigning a class to each centr…
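The supervised clustering step the abstract describes — assigning a class to each centroid — can be illustrated with a minimal nearest-centroid sketch. This is not the authors' CLASO implementation; the function and data below are hypothetical, showing only the generic idea of giving each centroid the majority class of its nearest training patterns.

```python
# Illustrative supervised-clustering step (NOT the CLASO algorithm):
# each centroid is assigned the majority class of the training
# patterns that lie closest to it.
from collections import Counter

def assign_centroid_classes(patterns, labels, centroids):
    nearest = [[] for _ in centroids]
    for p, lab in zip(patterns, labels):
        # index of the centroid closest to pattern p (squared distance)
        idx = min(range(len(centroids)),
                  key=lambda i: sum((a - b) ** 2
                                    for a, b in zip(p, centroids[i])))
        nearest[idx].append(lab)
    # majority vote per centroid; None if no pattern fell in a cluster
    return [Counter(labs).most_common(1)[0][0] if labs else None
            for labs in nearest]
```

For example, with two centroids and patterns labelled "low"/"high" by PM10 level, each centroid inherits the dominant label of its neighbourhood.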

Cited by 3 publications (4 citation statements) | References 14 publications
“…The artificial neuron receives the output of all neurons connected to it, and the signal to be generated is amplified by the connection strength [7]. The weighted total is compared to the net value of the neuron, and the artificial neuron is triggered if the total is bigger than the threshold [8,9]. When triggered, the signal is transferred to the higher-level neurons attached to it.…”
Section: Overview of Artificial NNs
confidence: 99%
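The threshold behaviour described in this excerpt — sum the weighted inputs, fire only if the total exceeds a threshold — can be sketched as a classic threshold unit. The weights, inputs, and threshold below are illustrative values, not taken from any cited model.

```python
# Minimal sketch of the thresholded artificial neuron described above:
# the unit forms a weighted total of its inputs and fires (emits a
# signal to higher-level neurons) only when that total exceeds the
# threshold. All numbers here are illustrative.

def neuron_fires(inputs, weights, threshold):
    weighted_total = sum(x * w for x, w in zip(inputs, weights))
    return weighted_total > threshold

print(neuron_fires([1.0, 0.5], [0.6, 0.4], 0.7))  # 0.8 > 0.7 -> True
print(neuron_fires([0.1, 0.1], [0.5, 0.5], 0.7))  # 0.1 < 0.7 -> False
```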
“…The output value after passing through the transfer function is between 0 and 1. The ANN is self-adaptive: it learns by adjusting all weight values from their error, via the backpropagation algorithm [36]. Stochastic gradient descent (SGD), one of the popular weight-optimization algorithms, was selected in this research to minimize the loss function, i.e., the error of the model.…”
Section: Artificial Neural Network
confidence: 99%
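The two ingredients this excerpt names — a transfer function squashing the output into (0, 1), and SGD adjusting weights from the error — can be sketched for a single sigmoid unit with squared-error loss. This is a generic textbook update, not the specific network of the cited study; the learning rate and data are illustrative.

```python
import math

def sigmoid(z):
    # transfer function: output lies strictly between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, x, target, lr=0.1):
    # One stochastic-gradient-descent update for a single sigmoid
    # unit under squared-error loss; the gradient uses the identity
    # sigmoid'(z) = y * (1 - y).
    z = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(z)
    err = y - target
    grad = [err * y * (1 - y) * xi for xi in x]
    return [wi - lr * gi for wi, gi in zip(w, grad)]
```

Repeatedly applying `sgd_step` on a training pair drives the unit's output toward the target, which is exactly the "learn from the error" loop the excerpt summarizes.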
“…This step is called the feedforward process. The ANN then learns from its error via the backpropagation algorithm, which is called the backward process (Magaña-Villegas et al., 2016; Slini et al., 2006). This reverse process applies different weight-optimization algorithms, such as gradient descent or Levenberg-Marquardt, to update the weights from errors.…”
Section: Artificial Neural Network
confidence: 99%
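The feedforward process mentioned here — each layer computing weighted sums and passing them through the transfer function before the backward pass updates the weights — can be sketched for a tiny one-hidden-layer network. The architecture and weights below are illustrative, not those of any cited model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, hidden_w, output_w):
    # Forward pass of a small one-hidden-layer network: each hidden
    # unit computes a weighted sum of the inputs and applies the
    # transfer function; the output unit does the same over the
    # hidden activations.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in hidden_w]
    return sigmoid(sum(w * h for w, h in zip(output_w, hidden)))
```

Training would then run the backward process: propagate the output error back through these layers and update `hidden_w` and `output_w`, e.g. by gradient descent as in the excerpt.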
“…Yildirim & Bayramoglu, 2006; S. Lira et al., 2007; Sfetsos & Vlachogiannis, 2009; Suárez Sánchez et al., 2011; Polat & Durduran, 2012; Yetilmezsoy & Abdul-Wahab, 2012; Muñoz et al., 2013; Asha B. Chelani, 2015; Cortina-Januchs et al., 2015; Hamid et al., 2016; Magaña-Villegas et al., 2016; Wongsathan & Chankham, 2016; W. Li et al., 2017; García Nieto et al., 2018a, 2018b.…”
Section: The Other Models
confidence: 99%