Object Recognition with Gradient-Based Learning
1999
DOI: 10.1007/3-540-46805-6_19

Cited by 741 publications (394 citation statements)
References 18 publications
“…On the learning side, methods for classifying the feature space have ranged from simple nearest neighbor schemes to more complex approaches such as neural networks [8], convolutional neural networks [17], probabilistic methods [11], [18] and linear or higher degree polynomial classifiers [13], [16].…”
Section: Related Work
confidence: 99%
“…Our experiments used the LeNet5-style architecture (LeCun et al. 1999), which has already been used successfully for OCR. The main difference between our net, shown in Figure 1, and LeNet5 is that our last layer is a combination of LeNet5's F6 and Output layers.…”
Section: Methods
confidence: 99%
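The LeNet5-style stack the statement above refers to alternates convolution and subsampling before the fully connected layers. A back-of-the-envelope shape walk-through, using the classic LeNet5 layer sizes (a sketch for illustration, not the exact net from the quoted paper):

```python
# Shape walk-through of a LeNet5-style stack:
# conv (C1) -> pool (S2) -> conv (C3) -> pool (S4) -> fully connected.
# Layer sizes follow the classic LeNet5 configuration.

def conv_out(size, kernel, stride=1):
    """Spatial size after a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, window):
    """Spatial size after non-overlapping subsampling."""
    return size // window

s = 32                # 32x32 input image
s = conv_out(s, 5)    # C1: 5x5 convolution -> 28x28
s = pool_out(s, 2)    # S2: 2x2 subsampling -> 14x14
s = conv_out(s, 5)    # C3: 5x5 convolution -> 10x10
s = pool_out(s, 2)    # S4: 2x2 subsampling -> 5x5
print(s)              # -> 5: feature-map size feeding the FC layers
```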
“…This method is employed by Deep Convolutional Neural Nets (DCNNs) (LeCun et al. 1999). Although these nets are usually referred to simply as Convolutional Neural Nets, we call them Deep Convolutional Neural Nets in order to emphasize equally the use of a deep architecture and of locally receptive fields.…”
Section: Background and Related Work
confidence: 99%
“…In the case of SVMs with a linear kernel, they showed that for a very large-scale problem, widely used SVM packages, including LIBSVM (Chang and Lin 2001) and SVMlight (Joachims 1998, 2006), took hours to converge, while SGD took less than 10 seconds. In fact, about ten years ago, LeCun et al. (1998b) and LeCun et al. (1999) had already shown that SGD is feasible for training multi-layered hierarchical models on large-scale handwritten digit recognition and object recognition tasks. Their results led the machine learning community to reconsider SGD as a useful optimization algorithm and shed new light on large-scale structured prediction.…”
Section: Related Work
confidence: 99%
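The scaling argument in the statement above — SGD finishing in seconds where batch SVM solvers take hours — comes down to each SGD update touching a single example, so the cost per step is independent of dataset size. A minimal sketch of SGD on a linear classifier with hinge loss; all names and the toy data are illustrative, not from the cited papers:

```python
# Minimal SGD sketch for a linear binary classifier (hinge loss + L2),
# illustrating per-example updates. Illustrative only.
import numpy as np

def sgd_linear_svm(X, y, lr=0.01, lam=0.001, epochs=10, seed=0):
    """Train weights w on labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):          # one example per update
            margin = y[i] * (X[i] @ w)
            # Subgradient of lam/2*||w||^2 + max(0, 1 - y * w.x)
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            w -= lr * grad
    return w

# Toy usage: four linearly separable points in 2-D.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = sgd_linear_svm(X, y)
```

The same per-example update rule is what made SGD practical for the multi-layered models in LeCun et al. (1998b, 1999): the gradient is simply backpropagated through the layers for each training example in turn.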