2018
DOI: 10.1016/j.patcog.2017.10.013

Recent advances in convolutional neural networks

Abstract: In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Leveraging the rapid growth in the amount of annotated data and the great improvements in the strength of graphics processing units, research on convolutional neural networks has emerged swiftly and achieved st…


Cited by 4,469 publications (2,342 citation statements)
References 203 publications (285 reference statements)
“…However, non-saturating nonlinearities such as the rectified linear unit (ReLU) allow much faster learning than these saturating nonlinearities, particularly for models that are trained on large datasets [18]. Moreover, a number of works have shown that the performance of ReLU is better than that of sigmoid and tanh activation [39]. Thus, most of the modern studies on ConvNets use ReLU to model the output of the neurons [28], [32], [33], [34].…”
Section: Activation Function
confidence: 99%
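To make the "non-saturating" point in the statement above concrete, here is a minimal NumPy sketch (not drawn from the cited works; the function names and sample inputs are illustrative assumptions). It shows that the ReLU gradient stays at 1 for positive inputs, while the sigmoid and tanh gradients shrink toward zero for large |x|, which is what slows learning with saturating nonlinearities.

```python
# Minimal sketch (illustrative, not from the cited papers): comparing the
# gradients of saturating activations (sigmoid, tanh) with the
# non-saturating ReLU.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # saturates: tends to 0 for large |x|

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2  # saturates: tends to 0 for large |x|

def relu_grad(x):
    return (x > 0).astype(float)  # non-saturating: equals 1 for all x > 0

x = np.array([-6.0, -1.0, 0.5, 6.0])   # sample pre-activations (assumed)
print("sigmoid'(x):", sigmoid_grad(x))  # tiny at |x| = 6
print("tanh'(x):   ", tanh_grad(x))     # tiny at |x| = 6
print("relu'(x):   ", relu_grad(x))     # 1 wherever x > 0
```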
“…To introduce non-linearity, the Rectified Linear Unit (ReLU) activation function was used after each convolution. It has the advantage of being resistant to the vanishing gradient problem while being simple in terms of computation and was shown to work better than sigmoid and hyperbolic tangent activation functions [34]. A square-shaped sliding window is used to scan the text-line image in the direction of the writing.…”
Section: Deep Models Based On Convolutional Recurrent Neural Network
confidence: 99%
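As a rough illustration of the pattern this statement describes (ReLU after each convolution, with the resulting text-line feature map read along the writing direction as a sequence), here is a minimal PyTorch sketch. The layer sizes, class count, and input shape are illustrative assumptions, not the architecture of the cited paper.

```python
# Minimal conv + ReLU + recurrent sketch for text-line images.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_classes: int = 80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),      # non-linearity after each convolution
            nn.MaxPool2d(2),            # halve height and width
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d((2, 1)),       # keep width resolution for the sequence
        )
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                        # x: (batch, 1, 32, width)
        f = self.features(x)                     # (batch, 64, 8, width / 2)
        f = f.permute(0, 3, 1, 2).flatten(2)     # (batch, width / 2, 64 * 8)
        seq, _ = self.rnn(f)                     # read features along the writing direction
        return self.classifier(seq)              # per-timestep class scores

model = TinyCRNN()
scores = model(torch.randn(2, 1, 32, 128))       # two 32x128 text-line crops
print(scores.shape)                              # torch.Size([2, 64, 80])
```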
“…In addition to the symmetries exploited here, a typical length scale, 1/Λ_QCD ∼ 10^−15 m, emerges dynamically in LQCD calculations. Consequently, there are potential advantages for a convolutional approach [82][83][84] at larger lattice volumes. Convolutional layers would again have to be customized, respecting the gauge symmetry of the problem.…”
Section: Results
confidence: 99%