2023
DOI: 10.1021/acs.iecr.3c02083
Prediction of Dynamic Viscosity and Sensitivity Study of Potassium Amino-Acid Salt Aqueous Solutions by an Artificial Neural Network According to the Structure

Arnaud Delanney,
Alain Ledoux,
Lionel Estel

Abstract: Aqueous solutions of potassium amino acid salts show promise for capturing carbon dioxide. Accurately predicting their viscosity is fundamental in the design of new processes. Indeed, high viscosity leads to low mass transfer kinetics and significant pressure drops. The higher the viscosity, the larger the contactor's size, and accurate correlation can be useful for contactor design. Moreover, knowing the influence of different groups of molecules on viscosity can help select the best molecule. Nowadays, the a…

Cited by 2 publications (2 citation statements) · References 47 publications
“…Improving the accuracy of the training algorithm for this network can reduce the mean square error and make it as small as possible. The output of the MLP is given by:

γ_jk = F_k( Σ_{i=1}^{N_{k−1}} w_ijk · γ_{i(k−1)} + β_jk )

where γ_jk and β_jk are, respectively, the contribution value and the bias weight of neuron j in layer k, and w_ijk denotes the connection weight.…”
Section: Deep Learning Methods Selection and Optimization
Confidence: 99%
“…where γ_jk and β_jk are, respectively, the contribution value and the bias weight of neuron j in layer k, and w_ijk denotes the connection weight. 31 The structure of the MLP neural network is shown in Figure 8. To ensure the accuracy of the ANN results, a trusted database is created.…”
Section: Multilayer Perceptron (MLP)
Confidence: 99%
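The layer equation quoted above can be sketched as a single MLP forward step. The sketch below is illustrative only: the function name `forward_layer`, the tanh activation, and the example weights are assumptions, not taken from the cited paper.

```python
import math

def forward_layer(gamma_prev, w, beta, activation=math.tanh):
    """One MLP layer: gamma_jk = F_k( sum_i w_ijk * gamma_i(k-1) + beta_jk ).

    gamma_prev : outputs of the previous layer, length N_{k-1}
    w          : connection weights, N_k rows of length N_{k-1}
    beta       : bias weights beta_jk, length N_k
    """
    return [
        activation(sum(w_ij * g for w_ij, g in zip(row, gamma_prev)) + b)
        for row, b in zip(w, beta)
    ]

# Tiny example: 3 inputs feeding 2 neurons (weights are made up).
gamma0 = [0.5, -1.0, 0.25]
w1 = [[0.1, 0.2, -0.3],
      [0.4, -0.5, 0.6]]
beta1 = [0.05, -0.05]
gamma1 = forward_layer(gamma0, w1, beta1)
```

Stacking such layers, with the output of layer k−1 fed as `gamma_prev` of layer k, gives the full multilayer perceptron described in the quoted sections.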