2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2016.0018

Automated Optimal Architecture of Deep Convolutional Neural Networks for Image Recognition

Cited by 17 publications (13 citation statements)
References 15 publications
“…The training of a CNN can be varied by adjusting values of parameters such as the number of layers in a neural network, the number of filters in each convolutional layer and the order of layers. At present, there is no established theory to determine the optimal values of these parameters as well as some inherent training parameters such as the learning rate and batch size (Albelwi and Mahmood, 2016). However, these parameters can be tuned in an iterative way to find a near-optimal configuration.…”
Section: Training the CNN-SMMC
confidence: 99%
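To make the iterative tuning the passage describes concrete, here is a minimal sketch of a plain random search over the parameters it names. Nothing below comes from the cited paper: the search space, the value ranges, and the evaluate stub are all illustrative assumptions.

```python
import random

# Illustrative search space over the hyper-parameters named in the
# quoted passage; the ranges are assumptions, not values from the paper.
SEARCH_SPACE = {
    "num_conv_layers": [2, 3, 4, 5],
    "filters_per_layer": [16, 32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
}

def evaluate(config):
    """Placeholder for training a CNN with `config` and returning
    validation accuracy. A real run would build, train and score a
    network; here the score is a deterministic stand-in."""
    random.seed(str(sorted(config.items())))
    return random.random()

def iterative_search(n_trials=20):
    """Sample configurations and keep the best one seen so far."""
    best_config, best_acc = None, -1.0
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        acc = evaluate(config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc

if __name__ == "__main__":
    config, acc = iterative_search()
    print(f"best config: {config}  (val. accuracy {acc:.3f})")
```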
“…The NMA was used in Convolutional Neural Network (CNN) optimization in [1,2], in conjunction with a relatively small optimization dataset. It works well for objective functions that are smooth, unimodal and not too noisy.…”
Section: Derivative-free Optimization: Nelder-Mead
confidence: 99%
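As a point of reference for how NMA is applied in this setting, SciPy exposes Nelder-Mead through scipy.optimize.minimize. The sketch below optimizes a smooth, unimodal stand-in for a CNN's validation-error surface over two continuous hyper-parameters; the objective and the continuous relaxation are assumptions for illustration, not the method of [1,2].

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Stand-in for validation error as a function of two continuous
    hyper-parameters (e.g. log learning rate and a width multiplier).
    Smooth and unimodal, the regime where NMA works well."""
    lr_log, width = x
    return (lr_log + 3.0) ** 2 + 0.5 * (width - 2.0) ** 2

# Derivative-free simplex search from an initial guess.
result = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
print("optimum:", result.x, "objective:", result.fun)
```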
“…The progressive sampling method presented in [40] is an incremental method that uses progressively larger samples as long as model accuracy improves. Instance selection algorithms for CNN architectures were proposed in [1,2,28].…”
Section: Filter the Selection Criterion Uses a Selection Function
confidence: 99%
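A compact reading of that sampling schedule is sketched below. The initial sample size, growth factor, and stopping threshold are illustrative assumptions, not values from [40], and `evaluate` is a hypothetical callback that trains a model on a sample and returns its accuracy.

```python
import random

def progressive_sampling(dataset, evaluate, growth=2.0, min_gain=0.005):
    """Train on progressively larger samples, stopping once accuracy
    no longer improves by at least `min_gain` (the plateau test)."""
    n, best_acc = 128, 0.0
    while n <= len(dataset):
        sample = random.sample(dataset, n)
        acc = evaluate(sample)
        if acc - best_acc < min_gain:
            break  # accuracy plateaued; stop growing the sample
        best_acc, n = acc, int(n * growth)
    return best_acc
```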
“…Furthermore, large variations in image quality and appearance may occur due to differences in acquisition, reconstruction or post-processing between different vendors and/or acquisition protocols, and require algorithm retraining to achieve optimal performance. Deep learning approaches, which are rapidly becoming the state of the art for many applications, also require the manual optimization of several hyper-parameters (Greenspan et al., 2016; Albelwi and Mahmood, 2016). Self-optimization of machine learning pipelines is a common problem in computer vision, as well as in the general machine learning and data analytics community (Cerquitelli et al., 2016; Corso et al., 2017).…”
Section: Introduction
confidence: 99%