2016
DOI: 10.1007/978-3-319-41321-1_19

Leveraging a Cluster-Booster Architecture for Brain-Scale Simulations

Cited by 14 publications (17 citation statements)
References 10 publications
“…Here, efficient memory access of the accelerator to main memory is critical. Recent developments of the NEURON simulator core (Carnevale and Hines, 2006) include optimizations to exploit vectorization support of modern CPUs (Kumbhar et al., 2016), leading to a significant reduction of wall clock time for the investigated scenarios. It remains to be shown in how far these optimizations can be efficiently ported to other network models, simulators, and computer architectures.…”
Section: Discussion (mentioning)
confidence: 99%
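The vectorization point quoted above refers to memory-layout and loop-structure changes that let the compiler emit SIMD code. Below is a minimal sketch in C of a structure-of-arrays (SoA) channel-current kernel of that kind; the struct, field, and function names are invented for illustration and are not taken from the NEURON/CoreNEURON sources.

```c
/* Minimal sketch, assuming a structure-of-arrays (SoA) layout of
 * per-compartment state; all names are invented for illustration,
 * not CoreNEURON code. */
#include <stddef.h>

typedef struct {
    double *v;   /* membrane potential per compartment  */
    double *g;   /* channel conductance per compartment */
    double *i;   /* channel current per compartment     */
    size_t  n;   /* number of compartments              */
} ChannelStateSoA;

/* Contiguous, unit-stride arrays let the compiler auto-vectorize this
 * per-timestep current update across compartments. */
void update_current(ChannelStateSoA *s, double e_rev)
{
    for (size_t k = 0; k < s->n; ++k) {
        s->i[k] = s->g[k] * (s->v[k] - e_rev);
    }
}
```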
“…The integration interval operations (listed in section 2) consume most of the simulation time (Kumbhar et al., 2016). The goal of CoreNEURON is to efficiently implement these operations considering different hardware architectures.…”
Section: CoreNEURON Design and Implementation (mentioning)
confidence: 99%
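For context on what the "integration interval operations" cover, the sketch below outlines the generic per-timestep work of a compartmental simulator (event delivery, current evaluation, matrix assembly and solve, state update). It is an illustrative outline in C under that assumption; the function names are placeholders and do not reflect CoreNEURON's actual control flow or API.

```c
#include <stdio.h>

/* Placeholder stages; in a real simulator each would operate on the
 * model's state arrays. All names are invented for this illustration. */
static void deliver_events(double t)       { (void)t; }
static void compute_channel_currents(void) { }
static void setup_tree_matrix(void)        { }
static void solve_tree_matrix(void)        { }
static void update_channel_states(void)    { }

/* Generic per-timestep ("integration interval") work of a compartmental
 * simulator; this outline is illustrative, not CoreNEURON's control flow. */
static void step(double t)
{
    deliver_events(t);            /* deliver pending spike events     */
    compute_channel_currents();   /* evaluate mechanism currents      */
    setup_tree_matrix();          /* assemble the tree (Hines) matrix */
    solve_tree_matrix();          /* solve for membrane potentials    */
    update_channel_states();      /* advance gating/state variables   */
}

int main(void)
{
    const double dt = 0.025;      /* ms; a typical fixed timestep */
    for (double t = 0.0; t < 1.0; t += dt)
        step(t);
    puts("simulated 1 ms");
    return 0;
}
```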
“…On the other hand, there have been a number of dramatic and far reaching changes in the processing of the parse tree and C code output as NEURON has evolved to make use of object oriented programming, variable step integrators (CVODE and IDA), threads, different memory layouts, and neural network simulations. In order to improve efficiency and portability on modern architectures like Intel Xeon Phi and NVIDIA GPUs, the core engine of the NEURON simulator is being factored out into the CoreNEURON simulator (Kumbhar et al., 2016). This simulator supports all NEURON models written in NMODL and uses a modified variant of the NMODL translator program called mod2c.…”
Section: Tools and Code Generation Pipelines (mentioning)
confidence: 99%