2018
DOI: 10.1016/j.neucom.2017.12.049

Evolutionary convolutional neural networks: An application to handwriting recognition

Abstract: Convolutional neural networks (CNNs) have been used over the past years to solve many different artificial intelligence (AI) problems, providing significant advances in some domains and leading to state-of-the-art results. However, the topologies of CNNs involve many different parameters, and in most cases, their design remains a manual process that involves effort and a significant amount of trial and error. In this work, we have explored the application of neuroevolution to the automatic design of CNN topolo…
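
The abstract describes evolving CNN topologies automatically rather than designing them by hand. As a rough illustration only, the sketch below runs a tiny genetic-style search over topology descriptors; the encoding, the mutation operator, and the surrogate fitness function are assumptions made for this example and do not reflect the paper's actual grammatical-evolution scheme or training procedure.

```python
# Minimal sketch of neuroevolution over CNN topology descriptors.
# NOT the authors' encoding or grammar: layer ranges, mutation rates,
# and the surrogate fitness below are illustrative assumptions.
import random

random.seed(0)

def random_topology():
    """Sample a candidate CNN topology as a plain dictionary."""
    return {
        "conv_layers": [
            {"filters": random.choice([16, 32, 64, 128]),
             "kernel": random.choice([3, 5, 7])}
            for _ in range(random.randint(1, 4))
        ],
        "dense_units": random.choice([64, 128, 256, 512]),
        "dropout": round(random.uniform(0.0, 0.5), 2),
    }

def fitness(topology):
    """Placeholder for validation accuracy after training the CNN.
    A real run would build and train the network; a cheap surrogate
    keeps this sketch self-contained and runnable."""
    depth = len(topology["conv_layers"])
    width = sum(layer["filters"] for layer in topology["conv_layers"])
    return depth * 0.1 + min(width, 256) / 256.0 - topology["dropout"] * 0.05

def mutate(topology):
    """Return a copy of the topology with one randomly perturbed gene."""
    child = {"conv_layers": [dict(l) for l in topology["conv_layers"]],
             "dense_units": topology["dense_units"],
             "dropout": topology["dropout"]}
    layer = random.choice(child["conv_layers"])
    layer["filters"] = random.choice([16, 32, 64, 128])
    if random.random() < 0.3:
        child["dropout"] = round(random.uniform(0.0, 0.5), 2)
    return child

def evolve(generations=10, pop_size=8, elite=2):
    """Elitist generational loop: keep the best, refill with mutants."""
    population = [random_topology() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        offspring = [mutate(random.choice(parents))
                     for _ in range(pop_size - elite)]
        population = parents + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    print("Best topology found:", evolve())
```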

Cited by 148 publications (84 citation statements)
References 36 publications (44 reference statements)
“…Also, DEvol [62], which uses genetic programming, has obtained an error rate of 0.6%. Baldominos et al. [63] presented a work in 2018 where the topology of the network is evolved using grammatical evolution, attaining a test error rate of 0.37% without data augmentation; this result was later improved to 0.28% by means of the neuroevolution of committees of CNNs [64]. Similar approaches of evolving a committee of CNNs were presented by Bochinski et al. [65], achieving a very competitive test error rate of 0.24%, and by Baldominos et al. [66], where the models comprising the committee were evolved using a genetic algorithm, reporting a test error rate of 0.25%.…”
Section: State of the Art
Citation type: mentioning (confidence: 99%)
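
The committee results quoted above combine several evolved CNNs into a single classifier. A minimal sketch of one possible combination rule, plurality voting over the members' predicted labels, is shown below; the cited works may combine their members differently, so this rule and the example labels are illustrative assumptions only.

```python
# Illustrative plurality voting over per-model class predictions.
# The cited works evolve the committee members; this combination
# rule is an assumption made for illustration.
from collections import Counter

def committee_predict(per_model_predictions):
    """per_model_predictions: one predicted class label per committee
    member, for a single test sample. Returns the plurality label."""
    label, _count = Counter(per_model_predictions).most_common(1)[0]
    return label

# Hypothetical example: five evolved CNNs classifying one digit image.
print(committee_predict([7, 7, 1, 7, 9]))  # -> 7
```
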
“…Technique and test error rate:

MetaQNN (ensemble) [61]: 0.32%
Fractional max-pooling CNN with random overlapping [34]: 0.32%
CNN with competitive multi-scale conv. filters [33]: 0.33%
CNN neuroevolved with GE [63]: 0.37%
Fast-learning shallow CNN [35]: 0.37%
CNN FitNet with LSUV initialization and SVM [59]: 0.38%
Deeply supervised CNN [27]: 0.39%
Convolutional kernel networks [36]: 0.39%
CNN with multi-loss regularization [37]: 0.42%
MetaQNN [61]: 0.44%
CNN (3 conv maxout, 1 dense) with dropout [17]: 0.45%
Convolutional highway networks [38]: 0.45%
CNN (5 conv, 3 dense) with retraining [45]: 0.46%
Network-in-network [39]: 0.47%
CNN (3 conv, 1 dense), stochastic pooling [25]: 0.49%
CNN (2 conv, 1 dense, relu) with dropout [24]: 0.52%
CNN, unsup. pretraining [17]: 0.53%
CNN (2 conv, 1 dense, relu) with DropConnect [24]: 0.57%
SparseNet + SVM [15]: 0.59%
CNN (2 conv, 1 dense), unsup. pretraining [16]: 0.60%
DEvol [62]: 0.60%
CNN (2 conv, 2 dense) [40]: 0.62%
Boosted Gabor CNN [42]: 0.68%
CNN (2 conv, 1 dense) with L-BFGS [43]: 0.69%
Fastfood 1024/2048 CNN [44]: 0.71%
Feature extractor + SVM [14]: 0.83%
Dual-hidden layer feedforward network [21]: 0.87%
CNN LeNet-5 [4]: 0.95%…”
Section: Technique / Test Error Rate
Citation type: mentioning (confidence: 99%)
“…Different CNN structures have been developed to solve image processing, pattern recognition, classification, and other problems. Recently, CNNs have been used for facial recognition (Lawrence, Giles, Ah Chung, & Back, 1997), handwritten character classification (Ciresan, Meier, Gambardella, & Schmidhuber, 2011), visual document analysis (Simard, Steinkraus, & Platt, 2003), face sketch synthesis (Jiao, Zhang, Li, Liu, & Ma, 2018), microaneurysm detection (Chudzik, Majumdar, Calivá, Al-Diri, & Hunter, 2018), fingerprint enhancement (Li, Feng, & Kuo, 2018), the segmentation of glioma tumours in brains (Hussain, Anwar, & Majid, 2018), handwriting recognition (Baldominos, Saez, & Isasi, 2018), granite tile classification (Ferreira & Giraldi, 2017), segmenting the neuroanatomy (Wachinger, Reuter, & Klein, 2018), change detection using heterogeneous optical and radar images (Liu, Gong, Qin, & Zhang, 2018), predicting eye fixations (Liu, Han, Liu, & Li, 2018), improving acoustic source localization in noisy and reverberant conditions (Salvati, Drioli, & Foresti, 2018), chest disease detection (Abiyev & Ma'aitah, 2018), short-term wind speed forecasting (Khodayar, Kaynak, & Khodayar, 2017), natural language processing (Kalchbrenner, Grefenstette, & Blunsom, 2014), and image and video recognition problems (Karpathy et al., 2014), showing good results.…”
Citation type: mentioning (confidence: 99%)
“…In this work, we will first use a previously described evolutionary algorithm [3] to optimize the topology of convolutional neural networks. This procedure includes mechanisms specifically devoted to preserving the diversity of the population during evolution.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
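
The statement above mentions mechanisms specifically devoted to preserving population diversity during evolution. One classic mechanism for this is fitness sharing, sketched below under an assumed genotype representation and distance function; the cited paper's actual diversity-preservation scheme may differ, so treat this strictly as an illustration of the general idea.

```python
# Sketch of fitness sharing, a common diversity-preservation scheme.
# The niche radius, genotypes, and distance below are assumptions
# made for this example, not the cited paper's mechanism.
def shared_fitness(raw_fitness, genotypes, distance, sigma_share=2.0):
    """Scale each raw fitness down by how crowded its niche is,
    so near-duplicate individuals stop dominating selection."""
    shared = []
    for me, f in zip(genotypes, raw_fitness):
        niche_count = sum(
            1.0 - distance(me, other) / sigma_share
            for other in genotypes
            if distance(me, other) < sigma_share
        )  # always >= 1.0 because each individual matches itself
        shared.append(f / niche_count)
    return shared

# Hypothetical integer genotypes and a Hamming-style distance.
genotypes = [[1, 2, 3], [1, 2, 3], [4, 5, 6]]
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
# The duplicated genotype is penalized; the unique one keeps its fitness.
print(shared_fitness([0.9, 0.9, 0.7], genotypes, hamming))  # [0.45, 0.45, 0.7]
```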