Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems 2015
DOI: 10.1145/2694344.2694358
PuDianNao

Cited by 168 publications (9 citation statements) · References 32 publications
“…Storage Constraints: Tens of TBs of storage space is needed to store the prior maps in large environments required by autonomous driving systems to localize the vehicle (e.g., 41 TB for an entire map of the U.S.).…”
Section: Storage Constraints
confidence: 99%
“…Liu et al [141] proposed the machine learning accelerator PuDianNao, which supports multiple machine learning scenarios (e.g., regression, classification, and clustering) as well as many machine learning techniques, including k-means, k-nearest neighbors, linear regression, classification tree, naive Bayes, support vector machine, and DNNs. PuDianNao mainly contains various Functional Units (FUs), three types of data buffers (HotBuf, ColdBuf, and OutputBuf), an instruction buffer (InstBuf), a DMA, and a control module, see Fig.…”
Section: A. ALU-Based Accelerators
confidence: 99%
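The HotBuf/ColdBuf split described in the statement above reflects PuDianNao's reuse-distance-based data placement: operands that are reused soon stay in the small hot buffer, while rarely-reused streams go to the cold buffer. A minimal sketch of that placement policy follows; the function name, threshold value, and example workloads are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of reuse-distance-based buffer placement, in the spirit
# of PuDianNao's HotBuf/ColdBuf split. The threshold and names are assumptions.

HOT_REUSE_THRESHOLD = 8  # hypothetical cutoff, measured in accesses


def place_operand(name: str, reuse_distance: int) -> str:
    """Assign an operand stream to a buffer based on its average reuse distance."""
    return "HotBuf" if reuse_distance <= HOT_REUSE_THRESHOLD else "ColdBuf"


# Example: in k-NN, the test sample is reused against every training sample
# (short reuse distance), while each training sample is streamed once
# (very long reuse distance).
placements = {
    "test_sample": place_operand("test_sample", 1),
    "training_set": place_operand("training_set", 10_000),
}
print(placements)  # {'test_sample': 'HotBuf', 'training_set': 'ColdBuf'}
```

Separating the buffers this way lets the frequently-reused data sit in a small, fast structure without being evicted by large streaming inputs, which is the motivation the accelerator's buffer design targets.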
“…The ShiDianNao accelerator is implemented using 65 nm CMOS technology. DianNao [53], DaDianNao [54], [146], PuDianNao [141], and ShiDianNao [78] are not built utilizing reconfigurable hardware, hence they cannot be adapted to changing application demands such as NN sizes.…”
Section: A. ALU-Based Accelerators
confidence: 99%
“…In DianNao, the SPM is mainly categorized into two classes, that is, the neuron buffer (including the input and output buffers, i.e., NBin and NBout) and the SB. Other accelerators in the DianNao family, including DaDianNao [34], PuDianNao [35], ShiDianNao [36], and DianNao [5], also employ the on‐chip SPM for efficiency. In addition to the DianNao family, other NN accelerators also leverage dedicated SPMs as precious resources for improving efficiency.…”
Section: Related Work
confidence: 99%
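The statement above describes how DianNao-family accelerators stage data through software-managed scratchpads (NBin, NBout, SB) instead of caches. A minimal sketch of that pattern, assuming illustrative tile sizes and a toy dot-product "layer" standing in for the functional units, might look like:

```python
# Sketch of scratchpad-staged tiling in the DianNao style: DMA a tile of
# inputs into NBin and weights into SB, compute with the functional units,
# and stage results in NBout. Sizes and the compute step are illustrative.

NBIN_TILE = 4  # hypothetical tile size (elements per DMA transfer)


def run_layer(inputs: list[int], weights: list[int]) -> int:
    """Process `inputs` tile by tile through SPM-like staging buffers."""
    nbout = []
    for start in range(0, len(inputs), NBIN_TILE):
        nbin = inputs[start:start + NBIN_TILE]        # DMA: DRAM -> NBin
        sb = weights[start:start + NBIN_TILE]         # DMA: DRAM -> SB
        partial = sum(x * w for x, w in zip(nbin, sb))  # functional units
        nbout.append(partial)                         # stage in NBout
    return sum(nbout)                                 # reduce / write back


print(run_layer([1, 2, 3, 4, 5], [1, 1, 1, 1, 1]))  # 15
```

The point of the explicit staging is that the compiler or controller, not a cache replacement policy, decides exactly which data occupies on-chip storage at each step, which is why these designs treat the SPM as a precious, deterministically managed resource.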