2019
DOI: 10.3390/w11040742

Dew Point Temperature Estimation: Application of Artificial Intelligence Model Integrated with Nature-Inspired Optimization Algorithms

Abstract: Dew point temperature (DPT) is known to fluctuate in space and time regardless of the climatic zone considered. Accurate estimation of the DPT is highly significant for various applications of hydro- and agro-climatological research. The current research investigated the hybridization of a multilayer perceptron (MLP) neural network with nature-inspired optimization algorithms (i.e., gravitational search (GSA) and firefly (FFA)) to model the DPT of two climatically contrasting (humid and semi-arid) regions …
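The abstract describes training an MLP with nature-inspired optimizers in place of gradient descent. Below is a minimal, self-contained sketch of that idea, using the firefly algorithm to search an MLP's weight space on a regression task. The network size, FFA parameters, and synthetic data are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of the MLP-FFA hybrid: a firefly algorithm searches the
# weight space of a small MLP instead of back-propagation training.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (meteorological inputs -> dew point temperature).
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.3 * X[:, 2]

N_HIDDEN = 8
DIM = 3 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, w2, b2

def unpack(theta):
    i = 0
    W1 = theta[i:i + 3 * N_HIDDEN].reshape(3, N_HIDDEN); i += 3 * N_HIDDEN
    b1 = theta[i:i + N_HIDDEN]; i += N_HIDDEN
    w2 = theta[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = theta[i]
    return W1, b1, w2, b2

def rmse(theta):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    pred = h @ w2 + b2                # linear output neuron
    return np.sqrt(np.mean((pred - y) ** 2))

# Firefly algorithm: dimmer fireflies move toward brighter (lower-RMSE) ones.
n_fireflies, n_iter = 20, 100
beta0, gamma, alpha = 1.0, 1.0, 0.1
pop = rng.normal(0, 1, size=(n_fireflies, DIM))
light = np.array([rmse(p) for p in pop])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] < light[i]:   # j is brighter, so it attracts i
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=DIM)
                light[i] = rmse(pop[i])

best = pop[np.argmin(light)]
print("best training RMSE:", rmse(best))
```

The same loop structure applies to GSA or any other population-based optimizer: only the position-update rule inside the double loop changes.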

Cited by 77 publications (28 citation statements) · References: 51 publications
“…Compared to other learning algorithms, such as back propagation (BP), ELM achieves swift learning and generalizes well in function approximation [52][53][54][55]. Using ELM in various engineering science fields, such as feature selection [56], classification [57], and regression [51,58], has provided acceptable results.…”
Section: Introduction
confidence: 99%
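The quoted statement contrasts ELM's training speed with back-propagation. A minimal sketch of why ELM is fast, under assumed data and network size: the hidden-layer weights are drawn at random and never trained, so the only fitting step is a single least-squares solve for the output weights.

```python
# Hedged ELM sketch: random fixed hidden weights, closed-form output weights.
# Data and network size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 4))
y = X @ np.array([0.5, -1.0, 0.2, 0.8]) + 0.1 * rng.normal(size=300)

n_hidden = 50
W = rng.normal(size=(4, n_hidden))   # random input weights (never trained)
b = rng.normal(size=n_hidden)        # random biases
H = np.tanh(X @ W + b)               # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y         # output weights: one least-squares solve

pred = np.tanh(X @ W + b) @ beta
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

There is no iterative weight-update loop at all, which is the source of the "swift learning" the citing authors mention.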
“…where O_d is the output decision, ω_j and ω_0 represent the connection weights, and h is the hidden-layer output. The hidden layers are also connected to the output layer through neural connections that hold the output weights [33,[71][72][73][74][75]. Initially, the connection weights hold random values until they intersect another connection, a phase in which they are multiplied by the associated weights at that intersection [34].…”
Section: Multilayer Perceptron Neural Network (MLP)
confidence: 99%
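For concreteness, a tiny sketch of the output computation the quote describes, with names mirroring the quoted notation (O_d, ω_j, ω_0, h); the values are illustrative, not from the paper.

```python
# O_d = Σ_j ω_j h_j + ω_0 : weighted sum of hidden activations plus a bias.
import numpy as np

h = np.array([0.2, -0.5, 0.9])       # hidden-layer outputs h_j
omega = np.array([0.4, 0.1, -0.3])   # output connection weights ω_j
omega0 = 0.05                        # bias weight ω_0

O_d = omega @ h + omega0             # output decision
print(O_d)
```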
“…where x is the input parameter vector of the studied problem and y is the output parameter (K_s); both are expressed in the high-dimensional (HD) feature space; e represents the independent random error [50]. For a given dataset G = {(x_i, y_i), i = 1, 2, …, l}, l is the training data size, x_i is the input value, and y_i is the output value.…”
Section: Support Vector Regression (SVR)
confidence: 99%
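The quoted SVR formulation learns y = f(x) + e over a dataset G = {(x_i, y_i)}, mapping inputs into a high-dimensional feature space. A brief sketch using scikit-learn's SVR with an RBF kernel, which supplies that implicit feature space; the data, hyperparameters, and the K_s-style target are illustrative assumptions, not the cited paper's setup.

```python
# Hedged SVR sketch: epsilon-SVR with an RBF kernel on synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(100, 2))                # input parameters x_i
y = np.exp(-X[:, 0]) + 0.3 * X[:, 1] \
    + 0.05 * rng.normal(size=100)                   # noisy output (stand-in for K_s)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)     # RBF kernel -> implicit HD feature space
model.fit(X, y)                                     # fit on G = {(x_i, y_i)}, i = 1..l
print("training R^2:", model.score(X, y))
```

The epsilon parameter sets the width of the tube within which errors are ignored, which is how SVR accommodates the independent random error e in the quoted formulation.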