2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2016.7446049
Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning

Abstract: The Boltzmann machine is a massively parallel computational model capable of solving a broad class of combinatorial optimization problems. In recent years, it has been successfully applied to training deep machine learning models on massive datasets. High performance implementations of the Boltzmann machine using GPUs, MPI-based HPC clusters, and FPGAs have been proposed in the literature. Regrettably, the required all-to-all communication among the processing units limits the performance of these efforts. This…
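As a rough illustration of the computational model the abstract refers to, the sketch below runs Gibbs sampling on a small Boltzmann machine under a simulated-annealing schedule. The weights, biases, network size, and cooling schedule are illustrative placeholders, not the paper's hardware mapping.

# Minimal sketch of stochastic Boltzmann machine updates (Gibbs sampling).
# Hypothetical example: W, b, and the annealing schedule are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def energy(s, W, b):
    """Energy of a binary state s under symmetric weights W and biases b."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s, W, b, T):
    """Update every unit once; each flip follows the logistic probability rule."""
    for i in range(len(s)):
        # Local field seen by unit i (the dot product a crossbar would compute).
        h = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-h / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

n = 8
W = rng.normal(size=(n, n)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)
s = rng.integers(0, 2, size=n).astype(float)

# Simulated annealing: lower the temperature so the network settles
# into a low-energy (near-optimal) configuration.
for T in np.geomspace(2.0, 0.05, 200):
    s = gibbs_step(s, W, b, T)
print("final state:", s, "energy:", energy(s, W, b))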

Citations: cited by 181 publications (83 citation statements)
References: 64 publications
“…Based on the neuromorphic devices and arrays shown above, neuromorphic computing systems can be further constructed. Since the VMM operation can be accelerated in memristor crossbars thanks to the parallel, in-memory, and analog characteristics of memristors, a large variety of neural networks can be accelerated, and neuromorphic systems capable of encoding and processing spatiotemporal information might also be built by exploiting device dynamics such as short-term plasticity. Figure summarizes typical network-level demonstrations to date that have been achieved experimentally, grouped according to the type of devices and algorithms used.…”
Section: Neural Network Accelerators Based on Memristors
Citation type: mentioning (confidence: 99%)
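To make the crossbar VMM statement above concrete, the following sketch shows the idealized analog vector-matrix multiply a memristor crossbar performs in a single step: row voltages scaled by cell conductances and summed per column (Ohm's law plus Kirchhoff's current law). The array size and conductance range are assumed values for illustration only.

# Idealized crossbar VMM: voltages drive rows, conductances weight them,
# and each column current is a dot product computed in parallel in the array.
import numpy as np

rng = np.random.default_rng(1)

G = rng.uniform(1e-6, 1e-4, size=(64, 32))   # cell conductances in siemens (assumed range)
v = rng.uniform(0.0, 0.2, size=64)           # row (word-line) voltages in volts

# Ideal column currents: every column sums its cells' currents at once,
# so the whole multiply happens in one step inside the array.
i_out = v @ G                                 # shape (32,), amperes

# Digital reference for comparison.
assert np.allclose(i_out, G.T @ v)
print("column currents (uA):", i_out[:4] * 1e6)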
“…Due to the limitations in device performance, memristive hardware is commonly considered more suitable for circumstances where the algorithm itself can tolerate device variations or relatively low precision. In stark contrast to ANNs, there are still many tasks that involve high-precision computing, such as numerical simulations.…”
Section: Other Arithmetic Accelerators
Citation type: mentioning (confidence: 99%)
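A minimal numerical sketch of the tolerance argument above: quantize the weights to a few bits and apply random per-cell variation, then measure how far the perturbed result drifts from the ideal one. The 4-bit resolution and 10% lognormal variation are assumptions chosen only to illustrate the trade-off, not measured device data.

# Effect of coarse weight precision and device variation on a VMM result.
import numpy as np

rng = np.random.default_rng(2)

W = rng.normal(size=(128, 64))
x = rng.normal(size=128)
ideal = x @ W

def noisy_vmm(W, x, sigma=0.1, bits=4):
    """Quantize weights to 2**bits levels, then multiply by per-cell variation."""
    levels = 2 ** bits - 1
    w_min, w_max = W.min(), W.max()
    Wq = np.round((W - w_min) / (w_max - w_min) * levels) / levels
    Wq = Wq * (w_max - w_min) + w_min
    Wq *= rng.lognormal(mean=0.0, sigma=sigma, size=W.shape)  # device variation
    return x @ Wq

approx = noisy_vmm(W, x)
rel_err = np.linalg.norm(approx - ideal) / np.linalg.norm(ideal)
print(f"relative error with 4-bit cells and ~10% variation: {rel_err:.2%}")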
“…Resistive technologies have shown excellent characteristics for future memory systems capable of storing large amounts of data and performing in-memory computation [31][32][33]. Resistive RAM (RRAM) is one of the most promising memristive devices currently under commercial development, exhibiting excellent scalability, high-speed switching, a wide dynamic resistance range that permits multi-level cells (MLC), and low power consumption [34].…”
Section: Memristive Crosspoint Arrays
Citation type: mentioning (confidence: 99%)
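The sketch below illustrates the multi-level-cell (MLC) idea mentioned above by mapping small integer weights onto discrete conductance levels inside an assumed conductance window; the window bounds and the 2-bits-per-cell choice are hypothetical, not RRAM device specifications.

# Mapping an n-bit integer weight onto one of 2**n conductance levels.
G_MIN, G_MAX = 1e-6, 1e-4      # assumed usable conductance window (siemens)
BITS_PER_CELL = 2              # 4 levels per cell (assumption)

def weight_to_conductance(w_int):
    """Map an integer weight in [0, 2**BITS_PER_CELL) to a conductance level."""
    levels = 2 ** BITS_PER_CELL - 1
    return G_MIN + (w_int / levels) * (G_MAX - G_MIN)

def conductance_to_weight(g):
    """Read back the nearest stored level from a (possibly drifted) conductance."""
    levels = 2 ** BITS_PER_CELL - 1
    return int(round((g - G_MIN) / (G_MAX - G_MIN) * levels))

for w in range(2 ** BITS_PER_CELL):
    g = weight_to_conductance(w)
    assert conductance_to_weight(g) == w
    print(f"weight {w} -> {g:.2e} S")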
“…k-means, k-nearest neighbors, naive Bayes, support vector machines, linear regression, classification trees, and deep neural networks. Bojnordi et al. [28] develop a memristive Boltzmann machine for large-scale combinatorial optimization and deep learning. They demonstrate their accelerator on graph partitioning and Boolean satisfiability problems, obtaining 57× higher performance and 25× lower energy consumption.…”
Section: Accelerators
Citation type: mentioning (confidence: 99%)
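To show how a combinatorial problem such as the graph partitioning mentioned above can be expressed as an energy to minimize, the toy sketch below encodes balanced two-way partitioning of a hypothetical 4-node graph as cut size plus a balance penalty and solves it by exhaustive search; a Boltzmann-machine accelerator would instead anneal an equivalent weight matrix toward a low-energy state. The graph and penalty weight are illustrative, not taken from the paper.

# Balanced graph partitioning as an energy-minimization problem.
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical 4-node graph
n, lam = 4, 0.5                                     # node count, balance penalty

def energy(s):
    """Cut size plus lam * (partition imbalance)^2 for spins s_i in {-1, +1}."""
    cut = sum((1 - s[i] * s[j]) / 2 for i, j in edges)
    return cut + lam * (s.sum() ** 2)

# Exhaustive search over all 2**n spin assignments on the toy instance.
best = min((np.array(cfg) for cfg in product([-1, 1], repeat=n)), key=energy)
print("best partition:", best, "energy:", energy(best))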