2018
DOI: 10.14569/ijacsa.2018.091031

Convolutional Neural Network Hyper-Parameters Optimization based on Genetic Algorithms

Abstract: In machine learning for computer vision-based applications, the Convolutional Neural Network (CNN) is the most widely used technique for image classification. Despite the efficiency of these deep neural networks, choosing an optimal architecture for a given task remains an open problem. In fact, CNN performance depends on many hyper-parameters, namely the network depth, the number of convolutional layers, the number of filters, and their respective sizes. Many CNN structures have been manually designed by researchers and then evaluated to ve…
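To make the idea concrete, below is a minimal sketch of how such a search could look: CNN hyper-parameters (depth, filters per layer, kernel sizes) are encoded as a chromosome and evolved with a simple genetic algorithm whose fitness is validation accuracy. This is not the paper's exact encoding or GA; the search space, population size, training budget, and operators are illustrative assumptions, and it uses TensorFlow/Keras with MNIST only because they are convenient for a self-contained example.

```python
# Illustrative sketch: GA over CNN hyper-parameters (not the authors' exact method).
import random
import tensorflow as tf

# Assumed, illustrative search space.
SEARCH_SPACE = {"n_conv_layers": [1, 2, 3], "filters": [16, 32, 64], "kernel_size": [3, 5]}

def random_chromosome():
    """One candidate architecture: depth plus per-layer filter counts and kernel sizes."""
    depth = random.choice(SEARCH_SPACE["n_conv_layers"])
    return {"depth": depth,
            "filters": [random.choice(SEARCH_SPACE["filters"]) for _ in range(depth)],
            "kernels": [random.choice(SEARCH_SPACE["kernel_size"]) for _ in range(depth)]}

def build_model(chrom):
    """Decode a chromosome into a small Keras CNN for 28x28 grayscale input."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(28, 28, 1))])
    for f, k in zip(chrom["filters"], chrom["kernels"]):
        model.add(tf.keras.layers.Conv2D(f, k, activation="relu", padding="same"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def fitness(chrom, x_tr, y_tr, x_va, y_va):
    """Validation accuracy after a deliberately short training run (for speed)."""
    model = build_model(chrom)
    model.fit(x_tr, y_tr, epochs=1, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_va, y_va, verbose=0)
    return acc

def crossover(a, b):
    """Take depth and filters from one parent, kernel choices from the other."""
    return {"depth": a["depth"],
            "filters": list(a["filters"]),
            "kernels": [random.choice(b["kernels"]) for _ in range(a["depth"])]}

def mutate(chrom, rate=0.2):
    for i in range(chrom["depth"]):
        if random.random() < rate:
            chrom["filters"][i] = random.choice(SEARCH_SPACE["filters"])
        if random.random() < rate:
            chrom["kernels"][i] = random.choice(SEARCH_SPACE["kernel_size"])
    return chrom

if __name__ == "__main__":
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
    x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0
    # Small subsets keep each fitness evaluation cheap in this sketch.
    x_tr, y_tr = x_tr[:5000], y_tr[:5000]
    x_va, y_va = x_te[:1000], y_te[:1000]

    population = [random_chromosome() for _ in range(6)]
    for gen in range(3):
        scored = sorted(population, key=lambda c: fitness(c, x_tr, y_tr, x_va, y_va), reverse=True)
        elites = scored[:2]  # keep the best architectures unchanged
        children = [mutate(crossover(*random.sample(elites, 2)))
                    for _ in range(len(population) - len(elites))]
        population = elites + children
        print(f"generation {gen}: best depth={elites[0]['depth']}, filters={elites[0]['filters']}")
```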

Cited by 72 publications (42 citation statements) | References 27 publications
“…We used our algorithm to find optimal structures for the datasets MNIST, CIFAR10, and CALTECH256, running each test 10 times with different seeds for the genetic algorithm. Besides our proposal, we also performed tests using the algorithms proposed by Loussaief et al. [29], Sun et al. [39], and Bhandare et al. [3] for each dataset. For Loussaief et al. we were able to execute 10 different runs, while for the rest only one run was performed because of time constraints.…”
Section: Results (mentioning)
confidence: 99%
“…Sun et al.'s algorithm yields 99.3% accuracy after 135.42 total hours, while Bhandare's algorithm achieves 99.02% accuracy after 6.4 hours. We can observe the evolution of the validation accuracies of the 10 elites of our proposal through the generations in Fig. 6. [Fragment of a results table comparing MNIST, CIFAR10, and Caltech256 accuracies, including Loussaief [29], omitted.] The second dataset we worked with is the CIFAR10 dataset. It is composed of 60,000 32x32 RGB images labeled in 10 categories, such as dog, truck, or ship.…”
Section: Results (mentioning)
confidence: 99%
“…The authors improved their results by combining CNN with GA to find the optimal CNN parameters, leading to more accurate image classification. In another image recognition study carried out by Loussaief and Abdelkrim (2018), the numbers of convolutional layers and filters, as well as their sizes in each layer, were optimized using GA. They managed to increase the classification accuracy of the CNN from 90% to >98%.…”
Section: Introduction (mentioning)
confidence: 99%