2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
DOI: 10.1109/cgo.2015.7054193

EMEURO: A framework for generating multi-purpose accelerators via deep learning

Abstract: Approximate computing is a very promising design paradigm for crossing the CPU power wall, primarily driven by the potential to sacrifice output quality for significant gains in performance, energy, and fault tolerance. Unfortunately, existing solutions have focused primarily on either new programming models or new hardware designs, leaving significant room between these two ends for software-based optimizations. To fill this void, additional efforts should target the compilation and runtime stages, which hav…

Cited by 14 publications (8 citation statements) · References 35 publications
“…Recent work has explored a variety of approximation techniques that include: (a) approximate storage designs [38,39] that trade quality of data for reduced energy [38] and longer lifetime [39], (b) voltage overscaling [28,40,41], (c) loop perforation [30,42,43], (d) loop early termination [29], (e) computation substitution [6,9,29,44], (f) memoization [7,8,45], (g) limited fault recovery [42,46–50], (h) precision scaling [16,51], (i) approximate circuit synthesis [19,52–57], and (j) neural acceleration [10–15].…”
Section: Related Work (mentioning)
confidence: 99%
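
As a concrete illustration of one technique listed above, here is a minimal, hypothetical sketch of loop perforation in Python: skipping a fixed fraction of loop iterations trades a small loss in accuracy for proportionally less work. The function names and perforation rate are illustrative and are not taken from any of the cited systems.

def mean_exact(values):
    """Exact mean over all elements."""
    return sum(values) / len(values)

def mean_perforated(values, skip=4):
    """Approximate mean that visits only every `skip`-th element (loop perforation)."""
    sampled = values[::skip]  # drop (skip - 1) / skip of the iterations
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(1_000_000)]
print("exact:     ", mean_exact(data))
print("perforated:", mean_perforated(data))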
“…This characteristic of many GPU applications provides a unique opportunity to devise approximation techniques that trade small losses in the quality of results for significant gains in performance and efficiency. Among approximation techniques, neural acceleration provides significant gains for CPUs [10–14] and may be a good candidate for GPUs. Neural acceleration relies on an automated algorithmic transformation that converts an approximable segment of code to a neural network.…”
Section: Introduction (mentioning)
confidence: 99%
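
To make that transformation concrete, below is a hedged, self-contained sketch of the neural-acceleration idea in Python/NumPy, not the cited systems' actual pipeline: a small multilayer perceptron is trained offline to mimic an approximable numeric kernel and is then used as a drop-in surrogate. The kernel, network size, and training loop are illustrative assumptions.

import numpy as np

def target(x):
    # Approximable region of code: a pure numeric kernel.
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(4096, 2))
y = target(X).reshape(-1, 1)

# One hidden layer trained with plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - y                       # mean-squared-error gradient (up to a constant)
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g                     # in-place update keeps the bindings above

def neural_surrogate(x):
    # Drop-in approximate replacement for `target`.
    return np.tanh(x @ W1 + b1) @ W2 + b2

print("mean abs error:", float(np.abs(neural_surrogate(X) - y).mean()))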
“…Neural network-based accelerators: McAffee and Olukotun [5] developed EMEURO, a neural-network emulation and acceleration platform. With a small approximation error rate, EMEURO can achieve considerable speedup in various applications within the image processing domain.…”
Section: Introduction (mentioning)
confidence: 99%
“…These frameworks delegate the control of approximate code execution to the programmer. Emeuro [60] efficiently breaks an application down into subroutines of varying granularity and automatically generates approximate alternatives for those subroutines using Artificial Neural Networks (ANNs). At execution time, an intelligent runtime system explores the high-dimensional subroutine space and generates a graph of computations whose nodes are either accurate versions of the subroutines or approximate, ANN-based ones.…”
Section: Approximate Computing (mentioning)
confidence: 99%
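
The selection step described above can be pictured with a small, hypothetical Python sketch: each subroutine carries an accurate implementation and an ANN-based alternative with a profiled error, and the runtime picks a variant per node under a quality budget. The names (Subroutine, select_variant) and the error-budget policy are assumptions for illustration, not Emeuro's actual interface.

import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subroutine:
    name: str
    accurate: Callable[[float], float]
    approximate: Callable[[float], float]   # stands in for a trained ANN surrogate
    expected_error: float                   # profiled error of the approximate variant

def select_variant(sub: Subroutine, error_budget: float) -> Callable[[float], float]:
    # Use the ANN-based variant only when its profiled error fits the budget.
    return sub.approximate if sub.expected_error <= error_budget else sub.accurate

kernel = Subroutine(
    name="sin_kernel",
    accurate=math.sin,
    approximate=lambda x: x - x ** 3 / 6.0,  # cheap stand-in for an ANN surrogate
    expected_error=0.08,
)

f = select_variant(kernel, error_budget=0.1)   # budget admits the approximate node
print(f(0.5), math.sin(0.5))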