2016
DOI: 10.1155/2016/1537325

Metaheuristic Algorithms for Convolution Neural Network

Abstract: A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement on convolution neural networks (CNN), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence…

Cited by 83 publications (44 citation statements)
References 16 publications
Citation types: 0 supporting, 41 mentioning, 0 contrasting

“…The cuckoo search and bat algorithms were used in [56] to adjust the weights of feed-forward neural networks. In [57] an algorithm was designed using metaheuristics to optimize convolutional neural networks. An application to long short-term memory (LSTM) neural network training was studied in [58].…”
Section: Hybridizing Metaheuristics With Machine Learning
Citation type: mentioning
confidence: 99%
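To make the weight-adjustment idea concrete, here is a minimal sketch of a cuckoo-search-style loop tuning the weights of a tiny feed-forward network. It is not the setup of [56]: the XOR task, network size, step scale, and the Cauchy step used as a stand-in for Lévy flights are all illustrative assumptions.

```python
# Minimal sketch (not the cited papers' exact setup): tuning the weights of a
# tiny feed-forward network with a simplified cuckoo-search loop.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, 2 inputs -> 2 hidden units -> 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 2
DIM = N_IN * N_HID + N_HID + N_HID + 1  # W1, b1, W2, b2 flattened

def forward(w, x):
    """Unpack a flat weight vector and run the network."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def fitness(w):
    """Mean squared error over the toy dataset (lower is better)."""
    return float(np.mean((forward(w, X) - y) ** 2))

# Cuckoo-search-style loop: each "nest" is a candidate weight vector.
N_NESTS, P_ABANDON, STEP = 15, 0.25, 0.5
nests = rng.normal(scale=1.0, size=(N_NESTS, DIM))
scores = np.array([fitness(n) for n in nests])

for _ in range(500):
    # New solution via a heavy-tailed random walk (stand-in for a Levy flight).
    i = rng.integers(N_NESTS)
    cand = nests[i] + STEP * rng.standard_cauchy(DIM)
    cand_fit = fitness(cand)
    j = rng.integers(N_NESTS)
    if cand_fit < scores[j]:              # replace a random worse nest
        nests[j], scores[j] = cand, cand_fit
    # Abandon a fraction of the worst nests and re-seed them randomly.
    worst = np.argsort(scores)[-int(P_ABANDON * N_NESTS):]
    nests[worst] = rng.normal(scale=1.0, size=(len(worst), DIM))
    scores[worst] = [fitness(n) for n in nests[worst]]

best = nests[np.argmin(scores)]
print("best MSE:", scores.min(), "predictions:", forward(best, X).round(2))
```

The only network-specific piece is the flatten/unpack convention in forward(); the search loop itself never looks inside the weight vector, which is what makes such methods easy to bolt onto any architecture.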
“…Likely the most well-known algorithm in the field of neuroevolution is the NeuroEvolution of Augmenting Topologies algorithm (NEAT) [13]. Recent studies have applied evolutionary strategies to deep neural networks in a variety of ways, such as evolving network weights via genetic programming [9,15] or differential evolution [11], pitting networks against each other under tournament selection [10], more recent extensions of NEAT [3,7], and evolving a network's activation functions [4]. The general "mood" at the intersection of these two fields is very much an exploratory one.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
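As an illustration of evolving network weights by differential evolution (the approach attributed to [11]), here is a minimal DE/rand/1/bin sketch over a flat weight vector. The toy sine-regression task, network shape, and hyperparameters are assumptions, not details from the cited studies.

```python
# Minimal sketch of DE/rand/1/bin evolving a flat neural-network weight vector.
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: fit y = sin(x) with a 1-8-1 tanh network.
X = np.linspace(-np.pi, np.pi, 32)[:, None]
y = np.sin(X).ravel()

H = 8
DIM = 1 * H + H + H + 1  # W1 (1xH), b1 (H), W2 (H), b2 (scalar)

def predict(w, x):
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H]
    b2 = w[-1]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def loss(w):
    return float(np.mean((predict(w, X) - y) ** 2))

NP, F, CR = 40, 0.7, 0.9          # population size, scale factor, crossover rate
pop = rng.normal(scale=1.0, size=(NP, DIM))
fit = np.array([loss(p) for p in pop])

for _ in range(300):
    for i in range(NP):
        # DE/rand/1: mutate using three distinct random population members.
        a, b, c = pop[rng.choice([k for k in range(NP) if k != i], 3, replace=False)]
        mutant = a + F * (b - c)
        # Binomial crossover: mix mutant and target gene by gene.
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True   # guarantee at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        # Greedy one-to-one selection: keep the trial only if it improves.
        tl = loss(trial)
        if tl < fit[i]:
            pop[i], fit[i] = trial, tl

print("best MSE:", fit.min())
```

The greedy one-to-one selection step is what keeps the population from degrading even with an aggressive scale factor, which is one reason DE is a popular choice for weight evolution.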
“…However, learning such layers by error backpropagation and stochastic gradient descent is ineffective when the training dataset and computing resources are limited. One promising way of training and fine-tuning neural networks is to apply metaheuristic algorithms, since they are characterized by better convergence and a lower probability of getting stuck in bad local optima [17]. Among them, the simulated annealing algorithm is worth highlighting, since it has outperformed gradient descent algorithms in the optimization of neural network classifiers [4].…”
Section: Literature Review and Problem Statement
Citation type: mentioning
confidence: 99%
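Below is a minimal sketch of simulated annealing applied to the weights of a small neural-network classifier, illustrating the Metropolis acceptance rule the excerpt alludes to. The cooling schedule, step size, and toy dataset are assumptions rather than the configuration used in [4].

```python
# Minimal simulated-annealing sketch for neural-network classifier weights.
import numpy as np

rng = np.random.default_rng(2)

# Toy binary classification: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

H = 4
DIM = 2 * H + H + H + 1  # W1 (2xH), b1 (H), W2 (H), b2 (scalar)

def predict(w, x):
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def energy(w):
    """Cross-entropy loss as the annealing energy (lower is better)."""
    p = np.clip(predict(w, X), 1e-9, 1 - 1e-9)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

w = rng.normal(scale=0.5, size=DIM)
e = energy(w)
T, COOLING, STEP = 1.0, 0.995, 0.1   # assumed geometric cooling schedule

for _ in range(5000):
    cand = w + STEP * rng.normal(size=DIM)     # random perturbation move
    ec = energy(cand)
    # Metropolis criterion: always accept improvements; sometimes accept
    # worse moves, with probability decaying as the temperature drops.
    if ec < e or rng.random() < np.exp(-(ec - e) / T):
        w, e = cand, ec
    T *= COOLING

acc = np.mean((predict(w, X) > 0.5) == y)
print(f"final loss {e:.3f}, training accuracy {acc:.2f}")
```

Early in the schedule the high temperature lets the search hop between basins, which is exactly the mechanism behind the claimed robustness to bad local optima.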