Network oscillations of different frequencies, durations, and amplitudes are hypothesized to coordinate information processing and transfer across brain areas. Among these oscillations, hippocampal sharp wave-ripple complexes (SPW-Rs) are among the most prominent. SPW-Rs are thought to play essential roles in memory consolidation as well as in information transfer to the neocortex. To date, most knowledge about SPW-Rs comes from experimental studies that average responses from neuronal populations monitored with conventional microelectrodes. In this work, we use a biophysical model of the hippocampus to investigate the spatiotemporal characteristics of SPW-Rs and how microelectrode size and distance influence SPW-R recordings. We also explore the contributions of neuronal spikes and synaptic potentials to SPW-Rs under two different types of network activity. Our study suggests that spikes from pyramidal cells contribute substantially to ripples, whereas high-amplitude sharp waves arise mainly from synaptic activity. Our simulations of the spatial reach of SPW-Rs show that the amplitudes of sharp waves and ripples decrease steeply with distance from the network, and that this effect is more prominent for smaller-area electrodes. Furthermore, the signal amplitude decreases strongly with increasing electrode surface area as a result of spatial averaging, and this relative decrease is more pronounced when the recording electrode is closer to the source of the activity. Through simulations of field potentials across a high-density microelectrode array, we demonstrate the importance of choosing the right spatial resolution for capturing SPW-Rs with high sensitivity. Our work provides insight into the contributions of spikes and synaptic potentials to SPW-Rs and describes how the measurement configuration affects local field potentials (LFPs), guiding experimental studies toward improved SPW-R recordings.
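The distance and electrode-size effects summarized above can be illustrated with a simple point-source volume-conductor model, in which the recorded signal is approximated as the potential averaged over a disk-shaped contact. The sketch below is a minimal illustration under these assumptions; the conductivity, source current, and contact radii are illustrative values, not parameters of the study's biophysical model.

```python
# Minimal sketch of the distance/electrode-size effect: a point current
# source in a homogeneous volume conductor (V = I / (4*pi*sigma*r)),
# with the electrode signal approximated as the average potential over a
# flat disk contact. All parameter values are illustrative.
import numpy as np

SIGMA = 0.3  # extracellular conductivity (S/m), a common literature value

def point_source_potential(i_src, r):
    """Potential (V) of a point current source at distance r (m)."""
    return i_src / (4.0 * np.pi * SIGMA * r)

def disk_electrode_potential(i_src, dist, radius, n=2000, seed=0):
    """Average potential over a disk contact of the given radius (m),
    centered at distance `dist` (m) from the source, via Monte Carlo."""
    rng = np.random.default_rng(seed)
    # uniform samples on the disk (polar coordinates)
    rho = radius * np.sqrt(rng.random(n))
    phi = 2.0 * np.pi * rng.random(n)
    x, y = rho * np.cos(phi), rho * np.sin(phi)
    r = np.sqrt(x**2 + y**2 + dist**2)  # sample-to-source distances
    return point_source_potential(i_src, r).mean()

if __name__ == "__main__":
    i_src = 1e-9  # 1 nA source, illustrative
    for dist_um in (20, 50, 100, 200):
        d = dist_um * 1e-6
        small = disk_electrode_potential(i_src, d, radius=5e-6)
        large = disk_electrode_potential(i_src, d, radius=25e-6)
        print(f"d={dist_um:4d} um  5-um contact: {small*1e6:8.2f} uV  "
              f"25-um contact: {large*1e6:8.2f} uV")
```

Running this sketch reproduces both trends described in the abstract: the averaged potential falls off steeply with distance, and the larger contact reports a smaller amplitude, with the gap widest when the contact is close to the source.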
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing accurate and fast power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework facilitates future research on design space exploration of DNN accelerators across design choices such as bit precision, processing element type, scratchpad sizes of processing elements, global buffer size, total number of processing elements, and DNN configurations. Our results show that different bit precisions and processing element types lead to significant differences in performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy varies by more than 5× and 35×, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy and up to 5.7× improvement in performance per area and energy compared to the best INT16-based implementation. Finally, owing to the efficiency of the pre-characterized power, performance, and area models, QUIDAM speeds up the design exploration process by three to four orders of magnitude, as it removes the need for expensive synthesis and characterization of each design.
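To make the role of pre-characterized models concrete, the sketch below enumerates a toy design space using a lookup table of per-PE area and power in place of per-design synthesis, which is what makes exhaustive sweeps cheap. The table values, the PE_LIBRARY and DesignPoint names, and the simple cost model are hypothetical, not QUIDAM's actual characterization data or API.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative per-PE characterization: (area_mm2, power_mW, rel. throughput).
# In a real flow these numbers come from one-time synthesis/characterization.
PE_LIBRARY = {
    ("INT4",  "light"):    (0.0010, 0.15, 1.0),
    ("INT8",  "light"):    (0.0022, 0.35, 1.0),
    ("INT8",  "standard"): (0.0030, 0.50, 1.0),
    ("INT16", "standard"): (0.0085, 1.60, 1.0),
}

GLOBAL_BUFFER_MM2 = 2.0  # fixed on-chip buffer overhead, illustrative

@dataclass
class DesignPoint:
    precision: str
    pe_type: str
    num_pes: int
    perf_per_area: float  # total throughput / total area
    perf_per_watt: float  # total throughput / total PE power

def sweep():
    """Enumerate the toy design space; no synthesis inside the loop."""
    points = []
    for (prec, pe_type), n in product(PE_LIBRARY, (256, 1024, 4096)):
        area, power, throughput = PE_LIBRARY[(prec, pe_type)]
        perf = throughput * n                       # total relative throughput
        total_area = n * area + GLOBAL_BUFFER_MM2   # PEs + shared buffer
        points.append(DesignPoint(prec, pe_type, n,
                                  perf / total_area,
                                  perf / (power * n)))
    return points

# report the three best designs by performance per area
for p in sorted(sweep(), key=lambda p: p.perf_per_area, reverse=True)[:3]:
    print(p)
```

Because every candidate is scored by table lookups and arithmetic rather than a synthesis run, sweeping thousands of configurations takes milliseconds, which is the source of the orders-of-magnitude speedup claimed above.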
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators and varied bit precision or quantization levels, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements (PEs) into the accelerator design space while providing accurate and fast power, performance, and area models. In this work, we present QADAM, a highly parameterized quantization-aware power, performance, and area modeling framework for DNN accelerators. Our framework facilitates future research on design space exploration and Pareto-efficiency of DNN accelerators across design choices such as bit precision, PE type, scratchpad sizes of PEs, global buffer size, total number of PEs, and DNN configurations. Our results show that different bit precisions and PE types lead to significant differences in performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy varies by more than 5× and 35×, respectively. We also show that the proposed lightweight processing elements (LightPEs) consistently achieve Pareto-optimal results in terms of accuracy and hardware efficiency. With the proposed framework, we show that LightPEs achieve on-par accuracy and up to 5.7× improvement in performance per area and energy compared to the best INT16-based design.
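As a minimal sketch of the Pareto filtering implied by "Pareto-optimal results": given candidate design points scored on accuracy and hardware efficiency (both higher-is-better), keep only those not dominated by any other point. The candidate values below are made up for illustration.

```python
def pareto_front(points):
    """Keep non-dominated (accuracy, efficiency) pairs; higher is better on both."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good on both
        # objectives and strictly better on at least one
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# made-up (accuracy, performance-per-area) candidates
candidates = [(0.712, 4.1), (0.708, 5.9), (0.695, 5.7), (0.713, 2.2)]
print(pareto_front(candidates))  # [(0.708, 5.9), (0.712, 4.1), (0.713, 2.2)]
```

Here (0.695, 5.7) is dropped because (0.708, 5.9) matches or beats it on both axes; the remaining points each trade accuracy against efficiency, which is the shape of the accuracy/hardware-efficiency frontier the abstract describes.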