2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966166

Ternary neural networks for resource-efficient AI applications

Abstract: The computation and storage requirements for Deep Neural Networks (DNNs) are usually high. This issue limits their deployability on ubiquitous computing devices such as smart phones, wearables and autonomous drones. In this paper, we propose ternary neural networks (TNNs) in order to make deep learning more resource-efficient. We train these TNNs using a teacher-student approach based on a novel, layer-wise greedy methodology. Thanks to our two-stage training procedure, the teacher network is still able to use…
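The truncated abstract does not spell out the ternarization scheme itself. As a rough illustration of what a ternary network's weights look like, here is a minimal sketch of threshold-based weight ternarization; the `ternarize` helper and the `delta_frac` rule are assumptions for illustration, following common ternary-quantization practice rather than the authors' exact method:

```python
import numpy as np

def ternarize(weights: np.ndarray, delta_frac: float = 0.7) -> np.ndarray:
    """Map real-valued weights to {-1, 0, +1}.

    Assumed threshold rule: weights whose magnitude exceeds delta keep
    their sign, the rest are zeroed. delta = delta_frac * mean(|w|) is a
    common heuristic, not necessarily Alemdar et al.'s scheme.
    """
    delta = delta_frac * np.mean(np.abs(weights))
    return np.sign(weights) * (np.abs(weights) > delta)

# Example: a small random weight matrix becomes a ternary one.
w = np.random.randn(4, 4)
print(ternarize(w))  # every entry is -1.0, 0.0, or 1.0
```

Storing each weight in 2 bits instead of 32 is what yields the storage and computation savings the abstract motivates.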

Cited by 158 publications (113 citation statements)
References 11 publications
“…The applicability of these is problematic because they could consume resources with extended complexity when implemented in live systems [11]. For instance, compiling large and effective Neural Networks requires considerable processing power and other hardware computing resources [41]. They also require a pre-generated dataset of known-good/known-bad paths and car movements, where 2/3 of the dataset is used to train the algorithm and the remaining 1/3 is used for evaluation purposes; accuracy, recall, F1-measure, precision and False-Positive-Rate (FPR) are usually reported as a means of measuring the quality of the tested classifier.…”
Section: Results
confidence: 99%
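As a concrete illustration of the evaluation protocol this statement describes (a 2/3 train, 1/3 evaluation split, reporting accuracy, precision, recall, F1 and FPR), here is a minimal sketch; the synthetic dataset is a hypothetical stand-in for the labelled paths/car movements, and scikit-learn's MLPClassifier is an assumed choice of classifier:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# Synthetic stand-in for a pre-generated known-good/known-bad dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# 2/3 of the dataset trains the classifier; the remaining 1/3 evaluates it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=0)

clf = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# The quality metrics the citing authors list.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("FPR      :", fp / (fp + tn))  # false-positive rate
```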
“…For the MNIST dataset, we achieve an FPS which is over 48/6× that of the nearest highest-throughput design [1] for our SFC-max/LFC-max designs respectively. While our SFC-max design has lower accuracy than the networks implemented by Alemdar et al. [1], our LFC-max design outperforms their nearest-accuracy design by over 6/1.9× for throughput and FPS/W respectively. For other datasets, our CNV-max design outperforms TrueNorth [6] for FPS by over 17/8× for the CIFAR-10/SVHN datasets respectively, while achieving 9.44× higher throughput than the design by Ovtcharov et al. [19], and 2.2× over the fastest results reported by Hegde et al. [9].…”
Section: Comparison To Prior Work
confidence: 91%