2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852476

AX-DBN: An Approximate Computing Framework for the Design of Low-Power Discriminative Deep Belief Networks

Abstract: The power budget for embedded hardware implementations of Deep Learning algorithms can be extremely tight. To address implementation challenges in such domains, new design paradigms, like Approximate Computing, have drawn significant attention. Approximate Computing exploits the innate error-resilience of Deep Learning algorithms, a property that makes them amenable for deployment on low-power computing platforms. This paper describes an Approximate Computing design methodology, AX-DBN, for an architecture bel…
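The abstract describes exploiting the error resilience of deep networks through approximate, reduced-precision arithmetic. The sketch below is a minimal illustration of that general idea applied to one layer of a deep belief network: weights are rounded to a signed fixed-point grid and the drift in hidden-unit activations is measured. It is not the AX-DBN methodology itself; the layer sizes, bit-widths, and random data are illustrative assumptions.

```python
# Hedged sketch: post-training fixed-point quantization of one DBN layer's
# weights, illustrating the kind of reduced-precision approximation that
# Approximate Computing frameworks exploit. Not the AX-DBN algorithm;
# sizes and bit-widths are hypothetical.
import numpy as np

def quantize_fixed_point(w, total_bits=8, frac_bits=6):
    """Round weights to a signed fixed-point grid with `frac_bits`
    fractional bits, then clip to the representable range."""
    scale = 2.0 ** frac_bits
    q_min = -(2 ** (total_bits - 1)) / scale
    q_max = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(w * scale) / scale, q_min, q_max)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 500))   # visible -> hidden weights
b = np.zeros(500)                            # hidden biases
v = rng.random(784)                          # one input sample

h_full = sigmoid(v @ W + b)                          # full-precision activations
h_quant = sigmoid(v @ quantize_fixed_point(W) + b)   # approximate activations

# Error-resilience check: how far do the hidden activations drift
# when the weights are stored at reduced precision?
print("max |activation error|:", np.max(np.abs(h_full - h_quant)))
```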

Cited by 2 publications (2 citation statements)
References 14 publications (23 reference statements)
“…In an effort to further reduce processing requirements, some RFML implementations have also embedded traditional signal processing techniques such as Fourier and wavelet transforms, cyclostationary feature estimators, and other expert features directly into the NN [170], [174], [175]. Meanwhile, other research has focused on reduced precision implementations of NNs, enabling a path towards real-time implementation [176]- [178]. However, reducing real-time computational resources to mobile systems remains a challenge that must be overcome, especially if online learning techniques are to be developed for future RFML systems [179], [180].…”
Section: A. Size, Weight and Power (SWaP); citation type: mentioning
Confidence: 99%
“…Further, some RFML implementations incorporate pre-calculated traditional signal processing techniques such as Fourier and wavelet transforms, cyclostationary feature estimators, and other expert features to serve as a more efficient feature that may be merged with machine learned behaviors [241], [245], [246]. Other research has focused on reduced precision implementations of machine learning structures as a method to gain computational efficiency [247]- [249]. However, the use of online learning techniques in RF scenarios requires real-time computational resources that are currently difficult to reduce to a mobile system [250], [251], in addition to the challenges discussed in Section III.…”
Section: Deployment; citation type: mentioning
Confidence: 99%