2016
DOI: 10.1016/j.compag.2016.05.017
Comparative analysis of reference evapotranspiration equations modelling by extreme learning machine

Cited by 86 publications (32 citation statements); references 51 publications.
“…However, the equation is relatively complex and requires many meteorological variables, including maximum and minimum air temperature, maximum and minimum relative humidity, wind speed at 2 m height, and solar radiation. A serious problem is that the numerous required data may lack accuracy or may not even be available at many stations (Allen et al., 1998; Feng et al., 2016; Feng et al., 2017; Gocic et al., 2016; Landeras et al., 2008; Shiri et al., 2012; Tabari and Talaee, 2011). To simplify the process of estimating ET0, many empirical temperature- or radiation-based methods, such as the McCloud, Priestley-Taylor, Blaney-Criddle and Hargreaves-Samani (HS) methods, can be applied for ET0 calculation (Hargreaves and Allen, 2003).…”
Section: Introduction (mentioning)
confidence: 99%
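The Hargreaves-Samani method named in the excerpt above needs only temperature data plus extraterrestrial radiation, which is why it is a popular simplification of the full Penman-Monteith equation. A minimal sketch of the standard HS (1985) form; the function name and the example input values are illustrative assumptions, not taken from the paper:

```python
import math

def hargreaves_samani(t_max, t_min, ra):
    """Hargreaves-Samani (1985) temperature-based estimate of ET0 (mm/day).

    t_max, t_min: daily maximum/minimum air temperature (deg C)
    ra: extraterrestrial radiation, expressed in mm/day of equivalent evaporation
    """
    t_mean = (t_max + t_min) / 2.0
    # ET0 = 0.0023 * (Tmean + 17.8) * sqrt(Tmax - Tmin) * Ra
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra

# Hypothetical summer day: Tmax = 30 C, Tmin = 18 C, Ra = 15 mm/day
et0 = hargreaves_samani(30.0, 18.0, 15.0)  # roughly 5 mm/day
```

The square-root term uses the diurnal temperature range as a proxy for cloudiness (and hence solar radiation), which is what lets the method avoid measured radiation data.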
“…There are three layers in the ELM model: the input layer, the hidden layer, and the output layer. A key feature is that the hidden layer does not need to be tuned: ELM randomly selects the input weights and then analytically determines the output weights of the SLFNs [40][41][42][43].…”
Section: Extreme Learning Machine (mentioning)
confidence: 99%
“…Hence, the ELM model randomly assigns the input weights and then analytically determines the output weights of the SLFNs. Detailed descriptions of the ELM can be found in Huang et al. [74], Gocic et al. [49], Abdullah et al. [5], and Patil and Deka [33].…”
Section: Extreme-Learning Machine (mentioning)
confidence: 99%
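The two excerpts above describe the same core ELM recipe: draw the input weights and hidden biases at random, never tune them, and solve for the output weights analytically by least squares. A minimal NumPy sketch of that recipe, as a regression example; the function names, hidden-layer size, and tanh activation are illustrative assumptions rather than details from the paper:

```python
import numpy as np

def elm_train(X, y, n_hidden=30, seed=0):
    """Train a single-hidden-layer feedforward network (SLFN) the ELM way:
    random, untuned input weights and biases; output weights solved
    analytically via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                                   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Forward pass: same random features, learned linear readout.
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)
```

Because only `beta` is learned, training reduces to one linear solve, which is why ELM is typically much faster to fit than a backpropagation-trained network of the same size.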