2013
DOI: 10.1016/j.jhydrol.2013.02.040
Databased comparison of Sparse Bayesian Learning and Multiple Linear Regression for statistical downscaling of low flow indices

Cited by 22 publications (9 citation statements) · References 53 publications
“…The SVM regression model is calibrated with monthly streamflow and re-analysis data for two locations in Australia. An SVM has been proven to be effective in downscaling streamflow (Ghosh and Mujumdar 2008, Joshi et al 2013, Sachindra et al 2013). Following Eghdamirad et al (2017), the SVM is used via the package "e1071" in the R computing platform using a radial basis function (RBF) as the kernel function.…”
Section: SVM Model Set-up
confidence: 99%
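The citing study above describes an RBF-kernel SVM regression fitted to monthly re-analysis predictors and streamflow, implemented with R's "e1071" package. The sketch below shows an analogous set-up in Python with scikit-learn rather than e1071; the predictor layout, synthetic data, and hyper-parameter grid are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of SVM regression for statistical downscaling, analogous to the
# e1071/RBF set-up cited above but written with scikit-learn.  The predictors,
# synthetic data and hyper-parameter grid are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical monthly predictors (e.g. re-analysis fields averaged over a grid
# cell) and a monthly streamflow predictand; replace with real gauge/re-analysis data.
n_months = 360
X = rng.normal(size=(n_months, 5))                                     # candidate predictors
y = 50 + X @ rng.normal(size=5) + rng.normal(scale=5, size=n_months)   # synthetic flow

# Standardise predictors and fit an RBF-kernel SVR; C, epsilon and gamma are
# tuned by time-ordered cross-validation on the calibration period only.
model = make_pipeline(
    StandardScaler(),
    GridSearchCV(
        SVR(kernel="rbf"),
        param_grid={"C": [1, 10, 100], "epsilon": [0.1, 1.0], "gamma": ["scale", 0.1]},
        cv=TimeSeriesSplit(n_splits=5),
    ),
)

split = int(0.8 * n_months)        # 80% calibration, 20% validation
model.fit(X[:split], y[:split])
print("validation R^2:", model.score(X[split:], y[split:]))
```

The chronological split keeps the validation period strictly after the calibration period, which is the usual practice when downscaling models are later applied to future climate scenarios.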
“…The variables selected as predictors for direct downscaling and the comparison of the downscaling efficiencies of DD1 and DD2 are presented in detail in Joshi et al (2013). With respect to the correlation between climate variables and selected low flow indices (Table 3), it was observed that the indices were primarily influenced by wind components (vertical, zonal and meridional) and humidity variables (specific and relative humidity).…”
Section: Intercomparison of Direct Downscaling Approaches (DD1 and DD2)
confidence: 99%
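The statement above refers to screening candidate large-scale predictors by their correlation with low flow indices. A minimal illustration of that screening step is sketched below; the variable names, the "7Q" index label, and the synthetic data are assumptions for the example, not values from the cited Table 3.

```python
# Illustrative predictor screening by correlation: rank candidate large-scale
# variables by their correlation with a low flow index.  Column names and data
# are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 240  # e.g. 20 years of monthly values

# Hypothetical candidate predictors and a 7-day low flow index.
climate = pd.DataFrame({
    "specific_humidity": rng.normal(size=n),
    "relative_humidity": rng.normal(size=n),
    "zonal_wind":        rng.normal(size=n),
    "meridional_wind":   rng.normal(size=n),
    "vertical_wind":     rng.normal(size=n),
    "mslp":              rng.normal(size=n),
})
low_flow_index = pd.Series(
    0.6 * climate["specific_humidity"] - 0.4 * climate["zonal_wind"]
    + rng.normal(scale=0.5, size=n),
    name="7Q",
)

# Pearson correlation of each candidate with the index, ranked by absolute value;
# the strongest predictors would be carried forward into the downscaling model.
corr = climate.corrwith(low_flow_index).sort_values(key=np.abs, ascending=False)
print(corr)
```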
“…The details of the results of DD have been shown in Joshi et al (2013) and are therefore briefly presented in the results section. The current study discusses the results of ID methods and their comparison with DD methods.…”
Section: Introduction
confidence: 99%
“…There were no clear guidelines for selecting calibration and validation periods. In general, the training data accounted for 70–80% of the time series and the rest was used as the testing subset (Tisseuil et al., 2010; Joshi et al., 2013; Tofiq and Guven, 2014; Okkan and Inan, 2015). Das and Nanduri (2018), however, used 90% of the data as the training subset in order to maximize the number of data available for training the nonlinear models.…”
Section: Introduction
confidence: 99%
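The quoted passage describes partitioning a hydrological time series so that 70–90% of it is used for calibration and the remainder for validation. A minimal sketch of such a chronological split is given below; the function name, array shapes, and synthetic data are illustrative assumptions.

```python
# Minimal sketch of the calibration/validation split discussed above: a
# chronological 80/20 partition of a hydrological time series (random shuffling
# would leak information across the split).  Names and sizes are illustrative.
import numpy as np

def chronological_split(X, y, train_fraction=0.8):
    """Split predictor matrix X and predictand y in time order."""
    n_train = int(round(train_fraction * len(y)))
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])

# Example with synthetic monthly data.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = rng.normal(size=300)
(train_X, train_y), (test_X, test_y) = chronological_split(X, y, train_fraction=0.8)
print(len(train_y), "calibration months,", len(test_y), "validation months")
```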