2015
DOI: 10.1038/ncomms9941

An event-based architecture for solving constraint satisfaction problems

Abstract: Constraint satisfaction problems are ubiquitous in many domains. They are typically solved using conventional digital computing architectures that do not reflect the distributed nature of many of these problems, and are thus ill-suited for solving them. Here we present a parallel analogue/digital hardware architecture specifically designed to solve such problems. We cast constraint satisfaction problems as networks of stereotyped nodes that communicate using digital pulses, or events. Each node contains an osc…
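As a rough illustration of the event-based idea in the abstract, the sketch below simulates a toy network of nodes that exchange discrete events while searching for a consistent assignment in a small graph-colouring constraint satisfaction problem. It is only a software caricature under assumed dynamics (min-conflict updates with a little noise); the node model, constraint graph, and noise level are invented here and do not correspond to the analogue oscillator circuits described in the paper.

```python
# Toy event-driven CSP network (illustrative only, not the paper's circuits).
import random

EDGES = [(0, 1), (1, 2), (0, 2), (2, 3)]   # constraint graph: adjacent nodes must differ
COLOURS = 3
NODES = 4

def neighbours(n):
    return [b if a == n else a for a, b in EDGES if n in (a, b)]

def conflicts(state, n):
    return sum(state[n] == state[m] for m in neighbours(n))

random.seed(0)
state = [random.randrange(COLOURS) for _ in range(NODES)]

# Event loop: each "event" is one node announcing its state; receivers that
# violate a constraint re-draw their own state, biased towards fewer conflicts.
for step in range(1000):
    sender = random.randrange(NODES)          # node emitting an event
    for receiver in neighbours(sender):
        if state[receiver] == state[sender]:  # constraint violated
            state[receiver] = min(
                range(COLOURS),
                key=lambda c: conflicts(state[:receiver] + [c] + state[receiver + 1:], receiver)
                + 0.5 * random.random())      # small noise breaks ties/loops
    if all(conflicts(state, n) == 0 for n in range(NODES)):
        print(f"solved after {step + 1} events: {state}")
        break
```

The point of the toy is the communication pattern: nodes act only when they receive an event, so no global controller or shared clock is needed.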


Cited by 52 publications (40 citation statements) | References 30 publications
“…Finally, memristive devices are receiving increasing interest for the development of other computing concepts based on neuromorphic networks with high computational power, such as the Hopfield recurrent neural network [188]. Although high acceleration has been achieved for the solution of hard constraint-satisfaction problems (CSPs), such as the Sudoku puzzle, via CMOS-based circuits [189], FPGAs [190], and quantum computing circuits [191], the use of memristive devices in crossbar-based neural networks can further speed up computation through the introduction of noise as a key resource [192], without requiring additional sources [193]. Moreover, very recent studies have also demonstrated the strong potential of memristive devices for the execution of complex algebraic tasks, including the solution of linear systems and differential equations, such as the Schrödinger and Fourier equations, in crossbar arrays in only one computational step [16], thus overcoming the latency of iterative approaches [15].…”
Section: Discussion
confidence: 99%
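To make the Hopfield-network route to CSP solving referenced above concrete, here is a minimal sketch, assuming a plain NumPy Hopfield network with annealed Glauber noise standing in for the memristive crossbar; the MAX-CUT instance, coupling weights, and temperature schedule are illustrative assumptions, not taken from the cited works.

```python
# Hopfield-style energy minimisation with injected noise (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

# Weights encode a small MAX-CUT instance: negative coupling on each edge
# rewards putting the two endpoints in different partitions (s_i != s_j).
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    W[i, j] = W[j, i] = -1.0

s = rng.choice([-1.0, 1.0], size=4)           # random initial state

def energy(s):
    return -0.5 * s @ W @ s                   # Hopfield energy E = -1/2 s^T W s

T = 2.0                                       # "noise" temperature
for sweep in range(200):
    for i in rng.permutation(4):
        h = W[i] @ s                          # local field on neuron i
        # Glauber update: probabilistic flip, noise amplitude set by T
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    T = max(0.05, T * 0.95)                   # anneal the noise away

print("state:", s, "energy:", energy(s))
```

The injected noise plays the role described in the quoted passage: it lets the state escape shallow local minima of the energy before the temperature is annealed away.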
“…These devices used sub-threshold analogue circuits and demonstrated spiking deep neural networks with low latency and very high power efficiency compared with deep networks running on a conventional digital cluster machine. More recent work showed that complementary metal-oxide-semiconductor (CMOS)-based neuromorphic multi-core processors with one million neurons and 256 million synapses reduce operating power consumption by a factor of 10^4 with respect to conventional CMOS architectures [26], and high operating efficiency was also demonstrated in analog circuits with LIF neurons and silicon synapses [27,28].…”
Section: Early Work and Large-scale Neuromorphic Systems
confidence: 99%
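For readers unfamiliar with the LIF neurons mentioned in [27,28], the following is a minimal sketch of the leaky integrate-and-fire model in discrete time; the time constant, threshold, and input current are assumed values for illustration and do not describe any particular chip.

```python
# Leaky integrate-and-fire neuron, forward-Euler simulation (illustrative).
import numpy as np

dt, tau, v_rest, v_thresh, v_reset, R = 1e-4, 20e-3, 0.0, 1.0, 0.0, 1.0
t = np.arange(0.0, 0.2, dt)
I = np.where(t > 0.05, 1.2, 0.0)              # step input current after 50 ms

v = v_rest
spikes = []
for k, i_in in enumerate(I):
    # membrane equation: tau * dv/dt = -(v - v_rest) + R * I
    v += dt / tau * (-(v - v_rest) + R * i_in)
    if v >= v_thresh:                         # threshold crossing emits a spike
        spikes.append(t[k])
        v = v_reset                           # reset after the spike

print(f"{len(spikes)} spikes, first at {spikes[0]*1e3:.1f} ms" if spikes else "no spikes")
```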
“…Such systems have received increasing attention in recent years (e.g., [8]-[10]), including parallel analog implementations, see [11]. In analog computing [12], the algorithm (representing the "software") is a dynamical system, often expressed in the form of differential equations running in continuous time over the real numbers, and its physical implementation (the "hardware") is any physical system, such as an analog circuit, whose behavior is described by the corresponding dynamical system.…”
Section: Introduction
confidence: 99%
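A small sketch of the "algorithm as a dynamical system" view quoted above: the gradient flow dx/dt = -A^T (A x - b) has the least-squares solution of A x = b as its attracting fixed point, so "running" the differential equation is the computation. The matrix, step size, and forward-Euler discretisation below are illustrative assumptions; analogue hardware would integrate the same equations physically in continuous time.

```python
# Continuous-time algorithm simulated in software (illustrative sketch).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.zeros(2)
dt = 0.05
for _ in range(2000):
    x += dt * (-A.T @ (A @ x - b))            # Euler step of the gradient flow

print("dynamical-system solution:", x)
print("direct solve:             ", np.linalg.solve(A, b))
```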
“…It is quite possible that, from an engineering point of view, the ideal approach combines both types of trade-offs: time vs. energy and time vs. hardware (distributed). The heuristic stochastic search in [11] is effectively a simulated annealing method, which implies high exponential runtimes for worst-case formulas. In contrast, the analog approach in [1] is fully deterministic, extracts the maximum information about the solution embedded implicitly within the system of clauses, and can efficiently solve the hardest benchmark SAT problems, at an energetic cost [1].…”
Section: Introduction
confidence: 99%
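As a concrete reference point for the "heuristic stochastic search" characterised above as simulated annealing, the sketch below runs simulated annealing on a tiny CNF formula; the formula, random seed, and cooling schedule are made up for illustration, and this is neither the specific method of [11] nor the deterministic dynamics of [1].

```python
# Simulated annealing on a toy SAT instance (illustrative sketch).
import math, random

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3) and (x1 or x2 or x3)
CLAUSES = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)],
           [(0, True), (1, True), (2, True)]]

def unsatisfied(assign):
    # a clause is satisfied if any literal matches the assignment
    return sum(not any(assign[v] == val for v, val in clause) for clause in CLAUSES)

random.seed(3)
assign = [random.random() < 0.5 for _ in range(3)]
T = 2.0
for _ in range(100_000):
    if unsatisfied(assign) == 0:
        break
    v = random.randrange(3)
    candidate = assign[:]
    candidate[v] = not candidate[v]           # propose flipping one variable
    delta = unsatisfied(candidate) - unsatisfied(assign)
    if delta <= 0 or random.random() < math.exp(-delta / T):
        assign = candidate                    # accept downhill moves, sometimes uphill
    T = max(0.05, T * 0.999)                  # slowly reduce the temperature

print("assignment:", assign, "| unsatisfied clauses:", unsatisfied(assign))
```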