Proceedings of the 30th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2018)
DOI: 10.1145/3210377.3210410

Constant-Depth and Subcubic-Size Threshold Circuits for Matrix Multiplication

Abstract: Boolean circuits of McCulloch-Pitts threshold gates are a classic model of neural computation, studied heavily in the late 20th century as a model of general computation. Recent advances in large-scale neural computing hardware have made their practical implementation a near-term possibility. We describe a theoretical approach for multiplying two N × N matrices that integrates threshold gate logic with conventional fast matrix multiplication algorithms that perform O(N^ω) arithmetic operations for a positive …
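For reference, the fast algorithms the abstract invokes perform O(N^ω) arithmetic operations for a constant ω < 3; Strassen's algorithm, with ω = log₂ 7 ≈ 2.81, is the classic example. The Python sketch below shows Strassen's recursion over conventional arithmetic; it is background for the abstract, not the paper's threshold-circuit construction.

```python
# Minimal sketch of Strassen's O(N^log2(7)) matrix multiplication.
# Background for the abstract; NOT the paper's threshold-circuit construction.
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose dimension is a power of two."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to the naive product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, (128, 128))
    B = rng.integers(-5, 5, (128, 128))
    assert np.array_equal(strassen(A, B), A @ B)  # exact in integer arithmetic
```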

Cited by 19 publications (6 citation statements); References 18 publications (23 reference statements)
“…It is well-known that the extraction of bits from a weighted sum, as well as the multiplication of binary numbers, can be carried out by threshold circuits (hence also by SNNs) with a small number of layers, typically 2 or 3, that does not depend on the bit length of the binary numbers involved, albeit at the cost of increasing the number of neurons to a low-degree polynomial of this bit length. A recent summary of such results is provided in Section 3 of Parekh et al. (2018). Hence one can replace the basic architectures of the AMOS units from Fig.…”
Section: Trade-Off Between Latency and Network Size of the SNNs
confidence: 99%
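The constant-depth bit extraction in this statement can be made concrete. Below is a minimal Python sketch of the textbook depth-2 threshold-circuit construction for reading one bit of a weighted sum; note that this naive version uses a first layer whose size is exponential in the bit length, whereas the polynomial-size circuits summarized in Section 3 of Parekh et al. (2018) are considerably more refined. The sketch is illustrative only.

```python
# Sketch: depth-2 threshold circuit extracting bit j of a weighted sum.
# Naive construction for illustration only; the polynomial-size circuits
# surveyed in Parekh et al. (2018), Section 3, are more refined.

def threshold_gate(weights, inputs, t):
    """McCulloch-Pitts gate: fires iff the weighted sum reaches threshold t."""
    return int(sum(w * z for w, z in zip(weights, inputs)) >= t)

def bit_of_weighted_sum(weights, inputs, j, bits):
    """Bit j of x = <weights, inputs>, assuming 0 <= x < 2**bits.

    Bit j equals 1 iff floor(x / 2**j) is odd, i.e. iff x falls in an
    interval [(2i+1)*2**j, (2i+2)*2**j); the alternating +1/-1 sum of the
    first-layer indicators [x >= m * 2**j] detects exactly that.
    """
    # Layer 1: one threshold gate [x >= m * 2**j] per multiple m.
    layer1 = [threshold_gate(weights, inputs, m * 2**j)
              for m in range(1, 2**(bits - j))]
    # Layer 2: a single gate over the layer-1 outputs with weights +1/-1.
    signs = [1 if m % 2 == 1 else -1 for m in range(1, 2**(bits - j))]
    return threshold_gate(signs, layer1, 1)

# 13 = 0b1101 presented as the weighted sum 1*8 + 1*4 + 0*2 + 1*1.
ws, zs = [8, 4, 2, 1], [1, 1, 0, 1]
print([bit_of_weighted_sum(ws, zs, j, 4) for j in range(4)])  # [1, 0, 1, 1]
```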
“…Such properties position Fugu to help explore under what parameterization or scale a neural approach may offer an advantage. For example, prior work has analyzed neural algorithms for computational kernels such as sorting, optimization, and graph analytics, identifying regimes in which a neural advantage exists once neural circuit setup, timing, and other factors are accounted for [10], [11], [12], [13].…”
Section: Fugu
confidence: 99%
“…An alternative version of this algorithm streams the inputs in over time and uses delays to perform the same computation with fewer neurons, albeit at an extended time cost, a characteristic that will produce a different neural circuit and associated metadata. It is also important to note that this metadata may be a function of the input parameters as well; for instance, in the matrix-multiplication application described in [7], there is a version of the algorithm whose depth is O(log log N), where N is the size of the largest matrix dimension.…”
Section: 2.1
confidence: 99%
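The neurons-versus-time trade-off described in this last statement can be illustrated schematically. The sketch below is not the construction from [7]; it only shows the two extremes for a single inner product: N product units firing in one parallel step versus one accumulator fed delayed inputs over N steps.

```python
# Schematic neurons-vs-time trade-off (not the construction from [7]):
# the same inner product via N parallel units or one time-multiplexed unit.

def dot_parallel(a, b):
    """N product units fire simultaneously: one time step, O(N) hardware."""
    products = [x * y for x, y in zip(a, b)]  # N units, one parallel step
    return sum(products)                      # single summation stage

def dot_streaming(a, b):
    """One unit reused over N steps: inputs arrive one pair per step."""
    acc = 0
    for x, y in zip(a, b):  # delays stagger the operand pairs in time
        acc += x * y        # a single unit does all N multiply-adds
    return acc

assert dot_parallel([1, 2, 3], [4, 5, 6]) == dot_streaming([1, 2, 3], [4, 5, 6]) == 32
```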