2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS)
DOI: 10.1109/icacsis.2016.7872787
Optimization of convolutional neural network using microcanonical annealing algorithm

Abstract: The convolutional neural network (CNN) is one of the most prominent architectures and algorithms in deep learning. It shows a remarkable improvement in the recognition and classification of objects. The method has also proven to be very effective in a variety of computer vision and machine learning problems. As with other deep learning methods, however, training the CNN is interesting yet challenging. Recently, some metaheuristic algorithms, such as the Genetic Algorithm and Particle Swarm Optimization, have been used to optimize CNNs…
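The paper's title indicates that the metaheuristic used here is microcanonical annealing: a variant of simulated annealing, based on Creutz's demon algorithm, in which the total energy (cost plus a "demon" reserve) is held fixed instead of sampling at a temperature. The following is a minimal sketch of microcanonical annealing over a generic parameter vector; the loss function, step size, demon budget, and iteration count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def microcanonical_annealing(loss_fn, w0, n_iter=10_000, step=0.01,
                             demon_energy=1.0, rng=None):
    """Sketch of microcanonical (demon) annealing, after Creutz (1983).

    All hyperparameters here are illustrative, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.copy()
    e = loss_fn(w)
    best_w, best_e = w.copy(), e
    for _ in range(n_iter):
        # Propose a small random perturbation of the parameters.
        cand = w + rng.normal(scale=step, size=w.shape)
        delta = loss_fn(cand) - e
        # Accept only if the demon can pay for an uphill move; a downhill
        # move deposits |delta| into the demon, so cost + demon energy
        # stays constant and the demon never goes negative.
        if delta <= demon_energy:
            w, e = cand, e + delta
            demon_energy -= delta
            if e < best_e:
                best_w, best_e = w.copy(), e
    return best_w, best_e

# Toy usage on a quadratic bowl (hypothetical example, not the paper's CNN):
w, e = microcanonical_annealing(lambda v: float(np.sum(v ** 2)), np.ones(10))
```

Unlike backpropagation, this procedure needs only loss evaluations, not gradients, which is why such metaheuristics are discussed below as alternatives when gradient descent stalls in local minima.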

Cited by 43 publications (27 citation statements), published 2017–2023. References 14 publications.
“…After 8800 iterations, the test error of CNN, CNN with L2, and PCNN reaches 2.84%, 2.77%, and 2.65%, respectively. (MNIST accuracy, %: [28] 99.22; Graph-CNN [29] 99.14; CNN MA [30] 98.75; MCNN-DS [31] 98.43)…”
Section: MNIST
confidence: 99%
“…Therefore, using the principles of the growing neural gas is a promising approach to unsupervised learning of the high-level layers and to automatically determining the necessary number of neurons. In this case, the output layers of the detector model require fine-tuning, which is typically implemented as one of the modifications of the error backpropagation algorithm [2,3]. However, this algorithm is characterized by a low convergence rate and by getting stuck in local minima of the loss function.…”
Section: Abbreviations
confidence: 99%
“…However, this algorithm is characterized by a low convergence rate and by getting stuck in local minima of the loss function. There are alternative metaheuristic search optimization algorithms; however, the effectiveness of these algorithms for fine-tuning networks is scantily explored [3].…”
Section: Abbreviations
confidence: 99%