2021
DOI: 10.1021/acsomega.1c03851
Kinetic Study of Product Distribution Using Various Data-Driven and Statistical Models for Fischer–Tropsch Synthesis

Abstract: Three modeling techniques, namely, a radial basis function neural network (RBFNN), a comprehensive kinetic model with a genetic algorithm (CKGA), and response surface methodology (RSM), were used to study the kinetics of Fischer–Tropsch (FT) synthesis. Using a 29 × 37 matrix (4 independent process parameters as inputs and the corresponding 36 responses as outputs) with 1073 data points in total for data training through the RBFNN, the established model is capable of predicting the hydrocarbon product distribution, i.e., the paraffin …
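As a hedged illustration of the RBFNN approach the abstract describes, the sketch below fits a Gaussian radial basis function network with a linear least-squares output layer to a 4-input, 36-output regression problem. The random data, number of centers, and basis width are placeholder assumptions for illustration, not the authors' dataset or tuned model.

```python
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    """Gaussian RBF features: phi[i, j] = exp(-||x_i - c_j||^2 / (2*sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))    # 4 process parameters (illustrative data)
Y = rng.uniform(size=(200, 36))   # 36 hydrocarbon responses (illustrative data)

# Pick RBF centers as a random subset of the training inputs (a common heuristic;
# k-means clustering is another standard choice).
centers = X[rng.choice(len(X), size=20, replace=False)]
sigma = 0.5                        # basis width; would be tuned in practice

# The output layer of an RBFNN is linear in the basis activations,
# so its weights can be solved in closed form by least squares.
Phi = rbf_design_matrix(X, centers, sigma)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

Y_pred = rbf_design_matrix(X, centers, sigma) @ W
print("training RMSE:", np.sqrt(np.mean((Y_pred - Y) ** 2)))
```

Once trained, predicting the full 36-response product distribution for a new set of process parameters is a single matrix product against the fitted weights.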

Cited by 3 publications (1 citation statement)
References 60 publications
“…In this work, 17 experimental conditions were used as the training data set (Table S1). In neural network construction, the activation functions are among the most important architectural choices; they transform a node's input (a weighted sum of the incoming signals) into the node's activation. In this work, the rectified linear unit (ReLU) activation function was used in TensorFlow during the supervised machine learning (SML) process to minimize errors caused by the vanishing gradient.…”
Section: Methods
confidence: 99%
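Below is a minimal sketch of the kind of ReLU-activated TensorFlow model the citing study describes, assuming a Keras Sequential network and a 17-sample training set of arbitrary shape; the layer widths, optimizer, and data are illustrative assumptions, not the cited paper's architecture. ReLU, f(z) = max(0, z), passes gradients through unchanged for positive inputs, which is why it mitigates the vanishing-gradient problem mentioned in the quote.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
X_train = rng.uniform(size=(17, 4))   # 17 experimental conditions (hypothetical features)
y_train = rng.uniform(size=(17, 1))   # one response per condition (hypothetical target)

model = tf.keras.Sequential([
    # ReLU activations keep gradients from shrinking multiplicatively
    # through the layers, the vanishing-gradient motivation quoted above.
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=200, verbose=0)
```

With only 17 training samples, a small network like this trains in seconds, but careful validation would be needed to guard against overfitting.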