2017
DOI: 10.1016/j.cherd.2017.07.029

Predicting hydrodynamic parameters and volumetric gas–liquid mass transfer coefficient in an external-loop airlift reactor by support vector regression

Cited by 21 publications (14 citation statements)
References 33 publications
“…The main concept in SVR is mapping input data into a higher-dimensional feature space and then constructing a kernel function that permits the problem to be solved by a linear regression function. SVR has a distinct advantage over an artificial neural network (ANN): it is less prone to overfitting because its objective function is convex, so a global optimum is typically reached [27]. Consequently, SVR results are consistent and reproducible, unlike those of an ANN, which may suffer from prediction uncertainties [28].…”
Section: Methods (mentioning)
confidence: 99%
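The convexity argument in the statement above can be illustrated with a minimal sketch. The data, learning rate, and step count below are hypothetical toy values, not from the cited work; the point is only that a convex least-squares objective has a single global minimum, so gradient descent lands on the same solution from any random starting weights, which is the property credited for SVR's reproducibility.

```python
import random

# Toy convex regression (hypothetical data, roughly y = 2x + 1).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.1]

def fit(seed, steps=5000, lr=0.01):
    """Gradient descent on mean-squared error from a random start."""
    rng = random.Random(seed)
    w, b = rng.uniform(-5, 5), rng.uniform(-5, 5)  # random initialization
    n = len(xs)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Five restarts with very different random initial points.
fits = [fit(seed) for seed in range(5)]

# Because the loss is convex, every restart converges to the same (w, b).
spread_w = max(f[0] for f in fits) - min(f[0] for f in fits)
spread_b = max(f[1] for f in fits) - min(f[1] for f in fits)
print(spread_w, spread_b)
```

A non-convex ANN loss has no such guarantee, which is why the statement contrasts SVR's reproducibility with ANN prediction uncertainty.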
“…The ANN results, including the weight values, depend on the initial assumptions about the parameters needed for ANN construction and fitting. Similarly, different numbers of hidden neurons can give different ANN model outcomes (Kojić & Omorjan, ). To prevent this problem, each topology was run several times to avoid spurious correlations arising from the initial assumptions and the random initialization of the weights.…”
Section: Methods (mentioning)
confidence: 99%
“…ANN results, including the weight values, depend on the initial assumptions about the parameters needed for ANN construction and fitting. 27,28 A series of topologies was tested, with the number of hidden neurons varied from 10 to 20; the training process of each network was run 100,000 times with random initial values of the weights and biases. The optimization was performed by minimizing the validation error.…”
Section: Artificial Neural Network (ANN) (mentioning)
confidence: 99%
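The restart-and-select strategy described in the last two statements can be sketched as follows. This is a hypothetical illustration, not the cited authors' setup: the network size, toy sine data, step counts, and number of restarts are all made up, but the selection rule is the same, train from many random initializations and keep the run with the lowest validation error.

```python
import math
import random

# Hypothetical toy task: fit one arch of a sine curve.
train = [(i / 10.0, math.sin(math.pi * i / 10.0)) for i in range(10)]
valid = [((i + 0.5) / 10.0, math.sin(math.pi * (i + 0.5) / 10.0)) for i in range(10)]

H = 4  # hidden neurons (illustrative, not from the paper)

def predict(p, x):
    return sum(p["v"][j] * math.tanh(p["w"][j] * x + p["b"][j])
               for j in range(H)) + p["c"]

def mse(p, data):
    return sum((predict(p, x) - y) ** 2 for x, y in data) / len(data)

def train_once(rng, steps=2000, lr=0.05):
    """One training run from random initial weights (full-batch gradient descent)."""
    p = {"w": [rng.uniform(-1, 1) for _ in range(H)],
         "b": [rng.uniform(-1, 1) for _ in range(H)],
         "v": [rng.uniform(-1, 1) for _ in range(H)],
         "c": rng.uniform(-1, 1)}
    n = len(train)
    for _ in range(steps):
        gw, gb, gv, gc = [0.0] * H, [0.0] * H, [0.0] * H, 0.0
        for x, y in train:
            e = 2 * (predict(p, x) - y) / n
            for j in range(H):
                h = math.tanh(p["w"][j] * x + p["b"][j])
                gv[j] += e * h
                d = e * p["v"][j] * (1 - h * h)
                gw[j] += d * x
                gb[j] += d
            gc += e
        for j in range(H):
            p["w"][j] -= lr * gw[j]
            p["b"][j] -= lr * gb[j]
            p["v"][j] -= lr * gv[j]
        p["c"] -= lr * gc
    return p

rng = random.Random(0)
runs = [train_once(rng) for _ in range(10)]       # random restarts
best = min(runs, key=lambda p: mse(p, valid))     # select by validation error
print(mse(best, valid))
```

Because the ANN loss is non-convex, each restart may stop at a different local minimum; selecting by validation error, as the quoted methods sections describe, filters out runs trapped in poor minima or fitting spurious correlations.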