Universal Approximation Property and Equivalence of Stochastic Computing-Based Neural Networks and Binary Neural Networks
2019
DOI: 10.1609/aaai.v33i01.33015369

Abstract: Large-scale deep neural networks are both memory- and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware accelerations of deep neural networks have been extensively investigated. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these…
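The abstract's claim that BNNs and SCNNs reduce to binary operations can be made concrete with a short sketch. The Python toy below is not the paper's construction; the function names and the unipolar encoding are illustrative assumptions. It shows how a BNN replaces a dot product with XNOR plus popcount, and how stochastic computing multiplies two values in [0, 1] with a single AND gate over random bitstreams.

    import numpy as np

    def bnn_dot(w_bits, x_bits):
        """BNN dot product: weights/activations in {-1, +1} stored as {0, 1} bits,
        so XNOR replaces multiplication and popcount replaces accumulation."""
        xnor = ~(w_bits ^ x_bits) & 1             # 1 exactly where the signs agree
        return 2 * int(xnor.sum()) - len(w_bits)  # map popcount back to a signed sum

    def sc_multiply(p_a, p_b, length=4096, seed=0):
        """Unipolar stochastic computing: a value in [0, 1] is the ones-density of
        a random bitstream, and a single AND gate multiplies two such values."""
        rng = np.random.default_rng(seed)
        a = rng.random(length) < p_a              # Bernoulli(p_a) bitstream
        b = rng.random(length) < p_b              # independent Bernoulli(p_b) bitstream
        return float(np.mean(a & b))              # ones-density estimates p_a * p_b

    w = np.array([1, 0, 1, 1])                    # encodes +1, -1, +1, +1
    x = np.array([1, 1, 0, 1])                    # encodes +1, +1, -1, +1
    print(bnn_dot(w, x))                          # exact signed dot product: 0
    print(sc_multiply(0.5, 0.8))                  # ~0.4; error shrinks as length grows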

Cited by 7 publications (5 citation statements)
References 27 publications (37 reference statements)
“…In contrast, the authors in [22] focus on stochastic computing-based NNs and show UA properties in the almost sure sense. The work of [7] considers UA properties of quantized ReLU networks for locally integrable functions on the Sobolev space.…”
Section: Related Work
confidence: 99%
“…Modifications and extensions have been established in [11] and [1], for example. UA properties of quantized NNs have been considered in [20], [7], and [22], for example. The first work focused on the uniform approximation capabilities of QNNs for Lipschitz-continuous functions.…”
Section: Related Work
confidence: 99%
“…Compared with precise computing applications [3], SC is more suitable for approximate computing, of which DNNs are an example. Moreover, the equivalence between SC-based DNNs and binary neural networks (BNNs) has recently been proved [50]. As the latter originate from the deep learning community [11], where many accuracy enhancement techniques have been developed, these advances can be migrated to bring the accuracy of SC-based DNNs close to software-based, floating-point levels.…”
Section: Motivation and Challenges of the Proposed Work
confidence: 99%
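To make the SC/BNN multiply-accumulate discussed in the statement above more tangible, here is a hedged toy sketch of a bipolar stochastic computing MAC, the building block such accelerators use in place of floating-point arithmetic. Bipolar encoding, XNOR multiplication, and MUX-based scaled addition are standard SC techniques; the function names and stream length are illustrative assumptions, not the construction of [50].

    import numpy as np

    rng = np.random.default_rng(1)

    def encode(v, n=8192):
        """Bipolar SC encoding: v in [-1, 1] becomes a bitstream with P(1) = (v + 1) / 2."""
        return rng.random(n) < (v + 1) / 2

    def decode(bits):
        """Invert the bipolar encoding: ones-density p maps back to 2p - 1."""
        return 2 * float(bits.mean()) - 1

    def sc_mac(xs, ws):
        """Toy bipolar multiply-accumulate: an XNOR gate multiplies each x_i * w_i,
        and a MUX with a uniform random select line adds with a 1/k scale factor,
        so the output estimates mean(x_i * w_i)."""
        products = [~(encode(x) ^ encode(w)) for x, w in zip(xs, ws)]  # XNOR = multiply
        sel = rng.integers(len(products), size=products[0].size)       # MUX select line
        return decode(np.choose(sel, products))

    print(sc_mac([0.5, -0.25], [0.8, 0.4]))  # ~ (0.4 - 0.1) / 2 = 0.15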
“…This is because the final classification result depends on the relative score/logit values of the different classes rather than on their absolute values. Recent works [35], [3] have pointed out the suitability of SC for DNN acceleration, and [50] has further proved the equivalence between SC-based DNNs and binary neural networks, where the latter originate from the deep learning community [11]. All of the above suggests the potential to build SC-based DNN acceleration using AQFP technology.…”
Section: Introduction
confidence: 99%
“…Impressive performance has been achieved with quantized networks, for example, on object detection [44] and natural language processing [43] tasks. The theoretical underpinnings of quantized neural networks, such as when and why their performance remains reasonably good, have been actively studied [3,13,22,41,46].…”
Section: Introduction
confidence: 99%