2020
DOI: 10.1109/tc.2019.2949042
ApGAN: Approximate GAN for Robust Low Energy Learning From Imprecise Components

Cited by 24 publications (11 citation statements)
References 24 publications
“…Roohi et al. [45] propose an approximate GAN architecture that pursues optimization in both hardware and software, using quantization to reduce computation. The approximation is also applied to the deconv layers to avoid the zeros introduced by fractional strides.…”
Section: State-of-the-art Hardware Architectures For Generative Adver...
confidence: 99%
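The zeros mentioned above come from how fractional-stride (transposed) convolution is usually realized: a stride-s transposed convolution is equivalent to inserting s-1 zeros between input elements and then running an ordinary convolution, so every multiply-accumulate against an inserted zero is wasted work. A minimal sketch of this zero insertion (illustrative only, not the paper's implementation):

```python
import numpy as np

def upsample_with_zeros(x, stride):
    """Insert stride-1 zeros between the elements of a 1-D input,
    as done before the convolution in a fractional-stride deconv."""
    out = np.zeros(len(x) * stride - (stride - 1))
    out[::stride] = x
    return out

x = np.array([1.0, 2.0, 3.0])
up = upsample_with_zeros(x, stride=2)
print(up)  # [1. 0. 2. 0. 3.]
# Fraction of operand positions that are inserted zeros:
print(1.0 - np.count_nonzero(up) / len(up))  # 0.4
```

An accelerator that recognizes these structural zeros can skip the corresponding multiply-accumulates entirely, which is the saving the citation refers to.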
“…The non-volatile memory technique exhibits attractive features such as compatibility and high-density integration. Roohi et al. proposed ApGAN, a processing-in-memory based approximate generative adversarial network architecture optimized from both the software and hardware perspectives [45]. In ApGAN, the architecture is designed for resource-limited environments, where both the hardware and software are optimized to minimize overhead.…”
Section: State-of-the-art Hardware Architectures For Generative Adver...
confidence: 99%
“…In [17], the memory array has been modified to include up to 4 memristors arranged in parallel in the same cell, in order to obtain multiple resistance values and thus higher-precision weights. Following an approach similar to [12], a GAN training accelerator has been discussed in [18] that efficiently performs approximate add/sub operations in a memristor array, achieving both speed-up and high energy efficiency.…”
Section: A Quick Overview
confidence: 99%
“…Some of them consider binary approximations, choosing an implementation based on emerging technologies. Some works [12,13,26,27] are based on MTJ technology, while [15-18,28,29] have used RRAM. In each of these works the resistive element is used to perform simple logical operations based on a current-sensing technique.…”
Section: Nn Implementations Based On Lim Concept
confidence: 99%
“…In the last two decades, Processing-in-Memory (PIM) architecture, a potentially viable way to overcome the memory-wall challenge, has been well explored for different applications [5], [6], [7], [8], [9], [10], [11]. In particular, processing-in-non-volatile-memory architectures have achieved remarkable success by dramatically reducing data-transfer energy and latency [12], [13], [14], [15], [16]. The key concept behind PIM is to realize logic computation within memory, processing data by leveraging the inherent parallel computing mechanism and exploiting the large internal memory bandwidth.…”
Section: Introduction
confidence: 99%
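The in-memory computation described above can be pictured as bulk bitwise operations applied across entire memory rows at once, rather than shuttling each word to a CPU. A minimal stand-in sketch (not from the cited works) using array-wide operations to mimic that row-parallel behavior:

```python
import numpy as np

# Two "memory rows" of small words; a PIM array would combine them in place
# via its sense amplifiers, one row-wide operation instead of N word transfers.
row_a = np.array([0b1100, 0b1010, 0b1111], dtype=np.uint8)
row_b = np.array([0b1010, 0b0110, 0b0001], dtype=np.uint8)

row_out = row_a & row_b  # whole-row bitwise AND in a single step
print([bin(v) for v in row_out])  # ['0b1000', '0b10', '0b1']
```

The energy and latency savings cited come from eliminating the per-word data movement that the conventional load-compute-store path would require.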