2014 19th Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/aspdac.2014.6742953
A scalable custom simulation machine for the Bayesian Confidence Propagation Neural Network model of the brain

Cited by 15 publications (17 citation statements) · References 15 publications
“…by introducing cache memory close to the CPU or by using general purpose GPUs, are not viable, if energy consumption is factored in [36]. We then described dedicated FPGA and full custom ASIC architectures that carefully balance the use of memory and information processing resources for implementing deep networks [42], [57], [40] or large-scale computational neuroscience models [67]. While these dedicated architectures, still based on frames or graded (non-spiking) neural network models, represent an improvement over CPU and GPU approaches, the event-based architectures described in Sections II-B1 and II-C improve access to cache memory structures even further, because of their better use of locality in both space and time.…”
Section: Discussion
confidence: 99%
“…However, these devices, developed to implement conventional logic architectures with small numbers of input (fan-in) and output (fan-out) ports, do not allow designers to make dramatic changes to the system's memory structure, leaving the von Neumann bottleneck problem largely unsolved. The next level of complexity in the quest of implementing brain-like neural information processing systems is to design custom ASICs using a standard digital design flow [67]. Further customization can be done by combining a standard digital design flow for the processing elements, and custom asynchronous routing circuits for the communication infrastructure.…”
Section: Large Scale Models of Neural Systems
confidence: 99%
“…• DRAM Power Model: Since DRAMs contribute significantly to the power consumption of today's systems [19,9], there is a need for accurate power modelling. For our framework we use DRAMPower [6,5], which uses either parameters from datasheets, estimated via DRAMSpec [25] or measurements to model DRAM power.…”
Section: Simulation Framework for Approx DRAM
confidence: 99%
“…For instance, the authors of [4] show in a power breakdown of a recent smartphone that DRAM contributes around 17% to the total system power. Moreover, there are applications, such as the GreenWave computing platform [29], in which 49% of the total power consumption has to be attributed to DRAMs, and even 80% for a system that imitates the human cortex based on an Application Specific Integrated Circuit (ASIC), as shown in [13]. In fact, the energy consumed per bit for accessing off-chip DRAM is two to three orders of magnitude higher than the energy required for on-chip memory accesses [27].…”
Section: Introduction
confidence: 99%