2020
DOI: 10.1088/1742-6596/1641/1/012034
Neural network parameters optimization with genetic algorithm to improve liver disease estimation

Abstract: Liver disease is an important public health problem. Over the past few decades machine learning has developed rapidly and has been introduced into medical applications. In this study we use a neural network (NN) to solve a regression task on a liver-disorders dataset. A genetic algorithm (GA) is applied to optimize the NN parameters and improve estimation performance. The NN-GA results are superior to those of the other methods compared.
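The NN-GA scheme in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's code: it assumes a one-weight linear model trained by SGD with momentum standing in for the NN, and a small genetic algorithm (truncation selection, uniform crossover, random mutation) searching over the two hyperparameters the citing work highlights, learning rate and momentum coefficient, with RMSE as the fitness score.

```python
import random
import math

random.seed(0)

# Toy regression data: y = 3x + 1 (stand-in for the liver-disorders dataset).
data = [(x / 10.0, 3 * x / 10.0 + 1.0) for x in range(20)]

def train_rmse(lr, momentum, epochs=50):
    """Train a one-weight linear model with SGD + momentum; return RMSE."""
    w, b = 0.0, 0.0
    vw, vb = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            gw, gb = err * x, err
            vw = momentum * vw - lr * gw   # momentum update
            vb = momentum * vb - lr * gb
            w += vw
            b += vb
    total = 0.0
    for x, y in data:
        d = (w * x + b) - y
        total += d * d
    mse = total / len(data)
    if not math.isfinite(mse):           # diverged run -> worst fitness
        return float("inf")
    return math.sqrt(mse)

def evolve(pop_size=10, generations=15):
    # Individuals are (learning_rate, momentum) pairs.
    pop = [(random.uniform(0.001, 0.5), random.uniform(0.0, 0.99))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: train_rmse(*p))
        parents = scored[: pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (random.choice([a[0], b[0]]),  # uniform crossover
                     random.choice([a[1], b[1]]))
            if random.random() < 0.3:              # mutation
                child = (min(0.5, max(1e-4, child[0] * random.uniform(0.5, 1.5))),
                         min(0.99, max(0.0, child[1] + random.uniform(-0.1, 0.1))))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: train_rmse(*p))

best_lr, best_mom = evolve()
print(best_lr, best_mom, train_rmse(best_lr, best_mom))
```

The GA replaces manual tuning: instead of hand-picking the learning rate and momentum, each generation keeps the half of the population with the lowest RMSE and breeds new candidates from it.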

Cited by 2 publications (2 citation statements)
References 13 publications
“…10. H. Harafani, et al. [73] suggested using a genetic algorithm to optimize the hyperparameters of a NN for predicting liver disease, rather than tuning them manually. They focused on two hyperparameters, the momentum coefficient and the learning rate, to improve estimation results, and RMSE was used as the assessment metric.…”
Section: Applications Used Hyperparameters Optimization Algorithms
confidence: 99%
“…Lastly, in [77] and [72] overfitting is overcome without relying on that hyperparameter: in [77] an early-stopping technique is used, while in the other the number of neurons is set manually. The second problem that occurs is falling into a local minimum; the most influential hyperparameter in this case is the momentum coefficient, as in the problem shown in [73]. In [4], [78], [81], [83] the batch-size hyperparameter was tuned, since it represents the number of samples processed before the model is updated and, for a large dataset, the number of complete passes through the whole training dataset.…”
confidence: 99%
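The early-stopping technique mentioned above as an overfitting remedy in [77] can be sketched generically. The function names and the toy validation-error curve below are assumptions for illustration, not code from any of the cited works:

```python
def early_stop_train(train_step, val_error, max_epochs=100, patience=5):
    """Run train_step(epoch) each epoch; stop once val_error() has not
    improved for `patience` consecutive epochs. Return the best epoch
    and its validation error."""
    best_err = float("inf")
    best_epoch = 0
    stale = 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = val_error()
        if err < best_err:
            best_err, best_epoch, stale = err, epoch, 0
        else:
            stale += 1
            if stale >= patience:   # no improvement for `patience` epochs
                break
    return best_epoch, best_err

# Toy demo: validation error falls until epoch 10, then rises
# (mimicking the onset of overfitting).
errors = [abs(e - 10) + 1 for e in range(100)]
epoch, err = early_stop_train(lambda e: None,
                              val_error=iter(errors).__next__)
```

In this toy run training halts shortly after epoch 10, where the simulated validation error bottoms out, rather than continuing for all 100 epochs.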