2021
DOI: 10.1016/j.fluid.2021.113179
Estimation of pure component parameters of PC-SAFT EoS by an artificial neural network based on a group contribution method

Cited by 20 publications (30 citation statements). References 51 publications.
“…Increasing the number of layers from one to two generally improves the prediction accuracy. Consequently, testing the addition of a second hidden layer for predicting Tg and Tm of PHA polymers is essential for the model’s success. Therefore, the total number of neurons in both layers was varied between 10 and 50, with a minimum of 5-5 and a maximum of 25-25.…”
Section: Results (mentioning; confidence: 99%)
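The neuron-count search described in the excerpt above can be sketched as a simple enumeration of two-hidden-layer architectures. The step size of 5 between candidate layer widths is an assumption; the excerpt states only the bounds (5-5 up to 25-25, with totals between 10 and 50):

```python
def candidate_architectures(lo=5, hi=25, step=5, total_min=10, total_max=50):
    """Return (n1, n2) layer-width pairs whose combined neuron count
    lies in the stated range. The step of 5 is an assumed grid spacing."""
    sizes = range(lo, hi + 1, step)
    return [(n1, n2) for n1 in sizes for n2 in sizes
            if total_min <= n1 + n2 <= total_max]

grid = candidate_architectures()
print(len(grid), grid[0], grid[-1])  # → 25 (5, 5) (25, 25)
```

Each candidate pair would then be trained and scored on a validation set to pick the best architecture.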
“…[86,87] Increasing the number of layers provides a shortcut to increasing the capacity of the model with fewer resources; generally, two hidden layers are sufficient for most problems pertaining to a single output response.[88] A minimal number of neurons may lead to an under-fitted model, which will have limited training and testing data accuracy. On the other hand, a model with a high number of neurons will be over-fitted, resulting in a high training accuracy with a low testing performance.…”
Section: Methods (mentioning; confidence: 99%)
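The under-/over-fitting trade-off the excerpt describes can be illustrated with two deliberately extreme regressors: a memorizing nearest-neighbour lookup (very high capacity) against a constant mean predictor (very low capacity). The data and models here are purely illustrative assumptions, not from the cited work:

```python
import random

random.seed(0)

def f(x):
    """Illustrative noisy linear relationship (not from the paper)."""
    return 2.0 * x + random.gauss(0, 0.5)

train = [(x / 10, f(x / 10)) for x in range(20)]
test = [(x / 10 + 0.05, f(x / 10 + 0.05)) for x in range(20)]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# High-capacity extreme: memorizes every training point (nearest-neighbour lookup)
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Low-capacity extreme: global mean, ignores the input entirely
mean_y = sum(y for _, y in train) / len(train)
def mean_model(x):
    return mean_y

print(mse(memorizer, train))  # 0.0: perfect training fit, yet noise is memorized
print(mse(memorizer, test) > 0)  # the memorized noise does not transfer to test data
```

A well-sized network sits between these extremes, which is why the cited work tunes the neuron count per layer.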
“…S3 (ESI†).[62–65] The Levenberg–Marquardt method was used in the optimization algorithm with the learning rate set to 0.01. In the ANN optimization process, the mean square error (MSE) was employed as the loss function; it is described by eqn (1).[66]…”
Section: Methods (mentioning; confidence: 99%)
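The loss in the excerpt's eqn (1) is the standard mean square error, MSE = (1/N) Σᵢ (ŷᵢ − yᵢ)². A minimal sketch of the loss, together with a one-parameter damped (Levenberg–Marquardt-style) update; the data and damping value are illustrative assumptions, not taken from the cited paper:

```python
def mse(pred, obs):
    """MSE = (1/N) * sum_i (pred_i - obs_i)^2."""
    if len(pred) != len(obs):
        raise ValueError("length mismatch")
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)

print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # (0 + 0.25 + 1.0) / 3 ≈ 0.4167

# Fit y = a*x with a damped Gauss-Newton (Levenberg-Marquardt-style) update.
# With one parameter, J^T J is a scalar and the damped step is explicit.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly a = 2 (illustrative data)

a, lam = 0.0, 0.01  # initial parameter and damping factor (assumed values)
for _ in range(20):
    residuals = [a * x - y for x, y in zip(xs, ys)]
    jtj = sum(x * x for x in xs)                   # J^T J
    jtr = sum(x * r for x, r in zip(xs, residuals))  # J^T r
    a -= jtr / (jtj + lam)                          # damped update step

print(round(a, 2))  # → 1.99, close to the least-squares solution
```

In a full ANN, the Jacobian is taken with respect to all weights, but the damped-step structure is the same.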