2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
DOI: 10.1109/micro50266.2020.00058

ConfuciuX: Autonomous Hardware Resource Assignment for DNN Accelerators using Reinforcement Learning

Abstract: DNN accelerators provide efficiency by leveraging reuse of activations/weights/outputs during the DNN computations to reduce data movement from DRAM to the chip. The reuse is captured by the accelerator's dataflow. While there has been significant prior work in exploring and comparing various dataflows, the strategy for assigning on-chip hardware resources (i.e., compute and memory) given a dataflow that can optimize for performance/energy while meeting platform constraints of area/power for DNN(s) of interest…
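
To make the abstract's problem concrete, the sketch below shows what a reinforcement-learning-driven resource-assignment loop could look like: a policy samples a (PE count, buffer size) pair, a cost model scores it, and the policy is nudged toward configurations that minimize latency and energy while respecting an area budget. Everything here (the `estimate_cost` model, the action space, the reward shaping, and the constants) is a hypothetical stand-in for illustration only, not ConfuciuX's actual optimizer, cost simulator, or constraint handling.

```python
# Hypothetical sketch: RL-style assignment of compute/memory resources for a DNN accelerator.
import math
import random

PE_CHOICES = [64, 128, 256, 512]     # candidate numbers of processing elements (assumed)
BUF_CHOICES = [32, 64, 128, 256]     # candidate on-chip buffer sizes in KB (assumed)
AREA_BUDGET = 700.0                  # hypothetical platform area constraint

def estimate_cost(pes, buf_kb):
    """Toy analytical model standing in for a dataflow-aware cost simulator."""
    latency = 1e6 / pes + 5e3 / buf_kb   # more PEs / larger buffers -> faster
    energy = 0.5 * pes + 0.2 * buf_kb    # ...but more energy per inference
    area = 1.2 * pes + 0.8 * buf_kb
    return latency, energy, area

def reward(pes, buf_kb):
    latency, energy, area = estimate_cost(pes, buf_kb)
    if area > AREA_BUDGET:               # infeasible under the area budget
        return -10.0
    return -(latency + energy) / 1e4     # minimize a joint latency+energy objective

# Tabular softmax policy over the joint (PEs, buffer) action space.
actions = [(p, b) for p in PE_CHOICES for b in BUF_CHOICES]
logits = {a: 0.0 for a in actions}
baseline, lr = 0.0, 0.1

def action_probs():
    weights = [math.exp(logits[a]) for a in actions]
    total = sum(weights)
    return [w / total for w in weights]

for step in range(3000):
    probs = action_probs()
    chosen = random.choices(actions, probs)[0]
    r = reward(*chosen)
    baseline += 0.05 * (r - baseline)            # running baseline reduces variance
    advantage = r - baseline
    # REINFORCE update for a softmax policy: d log pi(a) / d logit(a') = 1[a' == a] - pi(a')
    for a, p in zip(actions, probs):
        grad = (1.0 if a == chosen else 0.0) - p
        logits[a] += lr * advantage * grad

best = max(logits, key=logits.get)
print("selected (PEs, buffer KB):", best, "reward:", round(reward(*best), 3))
```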

Cited by 73 publications (66 citation statements)
References 82 publications
“…Iterative methods. Examples include simulated annealing [5], evolutionary search [43], Bayesian optimization [32], and reinforcement learning [19]. In this work, we examined several iterative search methods and chose evolutionary search given its high sample efficiency and search quality.…”
Section: Search Methods
confidence: 99%
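
The statement above contrasts several iterative search methods and settles on evolutionary search. As a rough illustration of that family of methods applied to the same kind of hardware design space, here is a minimal Python sketch; the configuration space and the `fitness` function are toy assumptions, not the cited works' actual models or search settings.

```python
# Hypothetical sketch: evolutionary search over accelerator configurations (PEs, buffer KB).
import random

PE_CHOICES = [64, 128, 256, 512]
BUF_CHOICES = [32, 64, 128, 256]

def fitness(cfg):
    pes, buf_kb = cfg
    latency = 1e6 / pes + 5e3 / buf_kb    # toy latency model
    energy = 0.5 * pes + 0.2 * buf_kb     # toy energy model
    return -(latency + energy)            # higher fitness is better

def mutate(cfg):
    pes, buf_kb = cfg
    if random.random() < 0.5:
        pes = random.choice(PE_CHOICES)
    else:
        buf_kb = random.choice(BUF_CHOICES)
    return (pes, buf_kb)

def crossover(a, b):
    return (a[0], b[1])                   # PEs from one parent, buffer from the other

population = [(random.choice(PE_CHOICES), random.choice(BUF_CHOICES)) for _ in range(16)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]              # keep the fittest configurations (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best configuration (PEs, buffer KB):", max(population, key=fitness))
```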
“…There is a plethora of previous works on performance tuning of systolic arrays [4,9,15,17,19,21,28,31,41]. Table 1 lists several recent works.…”
Section: Background and Related Work
confidence: 99%
“…DRAM Access Issues for Current FPGA-based Inference Accelerators: FPGAs have been widely adopted in edge domains thanks to well-developed FPGA-based inference accelerators [38,41]. Among these accelerators, many works [10,12,38] focus mainly on selecting optimal design parameters to improve acceleration performance for individual Conv layers, adopting optimization techniques such as loop tiling and loop unrolling.…”
Section: Related Work
confidence: 99%
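
The loop tiling mentioned in the statement above can be illustrated with a small, self-contained example: a blocked matrix multiply that processes fixed-size sub-blocks so operands are reused while they are held locally. The tile size and the matmul workload here are illustrative choices, not design parameters from any of the cited accelerators.

```python
# Hypothetical sketch: loop tiling (blocking) applied to a matrix multiply.
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Blocked GEMM: operate on (tile x tile) sub-blocks to improve data reuse,
    mirroring how a tiled accelerator keeps operands in local buffers."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i0 in range(0, M, tile):
        for j0 in range(0, N, tile):
            for k0 in range(0, K, tile):
                # Each sub-block of A and B is reused across the inner updates.
                C[i0:i0 + tile, j0:j0 + tile] += (
                    A[i0:i0 + tile, k0:k0 + tile] @ B[k0:k0 + tile, j0:j0 + tile]
                )
    return C

A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-3)
```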