Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2016
DOI: 10.3850/9783981537079_0819
Conditional Deep Learning for Energy-Efficient and Enhanced Pattern Recognition

Abstract: Deep learning neural networks have emerged as one of the most powerful classification tools for vision related applications. However, the computational and energy requirements associated with such deep nets can be quite high, and hence their energy-efficient implementation is of great interest. Although traditionally the entire network is utilized for the recognition of all inputs, we observe that the classification difficulty varies widely across inputs in real-world datasets; only a small fraction of inputs …

Cited by 118 publications (86 citation statements) | References 15 publications
“…During test time, each input instance is passed through the stages in sequence to produce a class label. Evaluation of the CDL methodology on the MNIST dataset demonstrates a 1.91x reduction in average operations per input [30]. In addition to energy efficiency, we also observe that the CDL network outperforms the baseline DLNN in terms of classification accuracy (97.5% for DLNN and 98.9% for CDL).…”
Section: Conditional Deep Learning
confidence: 83%
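The staged test-time pass described in this citation statement can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming a softmax-confidence test against a fixed threshold as the exit rule; the layer sizes, the 0.9 threshold, the function name staged_predict, and the MNIST-shaped input are illustrative assumptions, not details taken from [30].

```python
# Minimal sketch of confidence-gated early exit across network stages.
# Stage sizes and the 0.9 threshold are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Each stage pairs a convolutional feature extractor with a linear head.
stages = nn.ModuleList([
    nn.ModuleDict({
        "features": nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d(2)),
        "head": nn.Linear(cout * (28 // 2 ** (i + 1)) ** 2, 10),
    })
    for i, (cin, cout) in enumerate([(1, 8), (8, 16), (16, 32)])
])

@torch.no_grad()
def staged_predict(x, threshold=0.9):
    """Run stages in sequence; stop as soon as one head is confident."""
    for stage in stages:
        x = stage["features"](x)
        logits = stage["head"](x.flatten(1))
        conf, label = F.softmax(logits, dim=1).max(dim=1)
        if conf.item() >= threshold:   # easy input: exit early
            break
    return label.item()                # hard input: final stage decides

print(staged_predict(torch.randn(1, 1, 28, 28)))  # one MNIST-sized input
```

Easy inputs never reach the deeper convolutional layers, which is where the reduction in average operations per input comes from.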
“…It is well-known that the convolutional layers (CNN layers) of a DLNN, interpreted as visual layers, learn a hierarchy of features which transition from general (similar to Gabor filters and color blobs [28]) to specific, as we go deeper into the network [29]. We can utilize the generic-to-specific transition in the learnt features [30]. One way of achieving such a goal is to add a linear network of output neurons for each convolutional layer and monitor the output of the linear network to conditionally activate the deeper layers.…”
Section: Conditional Deep Learning
confidence: 99%
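As a sketch of this construction, the per-layer linear output networks might be packaged into a single module whose forward pass exposes every head for joint supervision. The class name ConditionalDeepNet, the layer sizes (matching the earlier sketch), and the summed cross-entropy loss are assumptions for illustration, not necessarily the paper's training recipe; at test time, a confidence-gated loop like the previous sketch decides whether the deeper layers are activated at all.

```python
# Sketch: one linear output head per convolutional block, so each head's
# output can be monitored to conditionally activate the deeper blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDeepNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # A linear network of output neurons attached after each conv block.
        self.heads = nn.ModuleList([
            nn.Linear(8 * 14 * 14, num_classes),
            nn.Linear(16 * 7 * 7, num_classes),
            nn.Linear(32 * 3 * 3, num_classes),
        ])

    def forward(self, x):
        """Return every head's logits so all stages can be supervised."""
        outs = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            outs.append(head(x.flatten(1)))
        return outs

# Joint training sketch: sum the per-head losses (an assumed recipe).
model = ConditionalDeepNet()
images, targets = torch.randn(4, 1, 28, 28), torch.randint(0, 10, (4,))
loss = sum(F.cross_entropy(logits, targets) for logits in model(images))
loss.backward()
```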
“…Future work will include enhancing the denoised powers-of-two networks with other complexity reduction techniques such as network pruning or conditional execution [34][35][36].…”
Section: Results
confidence: 99%
“…The structure of the network is shown in Fig. 1 [13]. There are three convolutional layers, each followed by a linear classifier.…”
Section: The CDLN Network
confidence: 99%
“…In 2016, Panda et al. [13] proposed the Conditional Deep Learning Network (CDLN), which adds extra linear classifiers after the convolutional layers. By monitoring the outputs of these linear classifiers, inputs that are easy to classify are labeled early and exit the network for fast inference.…”
Section: Convolutional Neural Network
confidence: 99%
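To make the energy argument concrete, here is a back-of-envelope sketch of how early exit lowers the average operation count. The per-stage costs and exit fractions below are invented for illustration and are not measurements from [13] or [30], so the resulting ratio will not match the 1.91x figure reported above.

```python
# Back-of-envelope sketch of why early exit saves operations.
# All numbers below are made-up illustrative values.
stage_cost = [1.0, 2.0, 4.0]      # relative cost of running each stage
exit_frac  = [0.60, 0.25, 0.15]   # fraction of inputs exiting at each stage

# An input exiting at stage k pays for stages 0..k.
cum_cost = [sum(stage_cost[:k + 1]) for k in range(len(stage_cost))]
avg_cost = sum(f * c for f, c in zip(exit_frac, cum_cost))
full_cost = sum(stage_cost)

print(f"average cost: {avg_cost:.2f}, full network: {full_cost:.2f}, "
      f"reduction: {full_cost / avg_cost:.2f}x")
```

With these assumed numbers, 60% of inputs pay only for the first stage, so the average cost per input (2.40) is well below the full-network cost (7.00), a 2.92x reduction in this toy setting.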