2020
DOI: 10.1109/jstqe.2019.2941485
Photonic Multiply-Accumulate Operations for Neural Networks

Abstract: It has long been known that photonic communication can alleviate the data movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capabilities to implement low precision linear operations, such as matrix multiplications, fast and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations. First, we investigate the limits of analog electronic crossbar a…
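The comparison described in the abstract rests on counting multiply-accumulate (MAC) operations and normalizing power by throughput. Below is a minimal sketch of that bookkeeping; the layer size, throughput, and power figures are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (illustrative assumptions, not values from the paper):
# counting multiply-accumulate (MAC) operations for a dense layer and
# converting a power budget into an energy-per-MAC figure.

def mac_count_dense(n_inputs: int, n_outputs: int) -> int:
    """A dense (fully connected) layer performs one MAC per weight."""
    return n_inputs * n_outputs

def energy_per_mac(power_watts: float, macs_per_second: float) -> float:
    """Energy per MAC in joules equals power divided by MAC throughput."""
    return power_watts / macs_per_second

if __name__ == "__main__":
    macs = mac_count_dense(1024, 1024)   # ~1.05e6 MACs per matrix-vector product
    throughput = 1e12                    # assumed 1 TMAC/s accelerator
    print(f"MACs per layer: {macs}")
    print(f"Energy per MAC at 1 W: {energy_per_mac(1.0, throughput):.2e} J")
```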

Cited by 225 publications (145 citation statements)
References 112 publications
“…Here, we discuss the computational efficiency of the proposed circuit. Although there are many indices for expressing the performance of a computing device, the multiply-accumulate operations per second (MAC·s⁻¹) is now widely considered to be a milestone in the photonic neuromorphic computation field 9,53,54 . Thus, we discuss this index for the reservoir computer.…”
Section: Discussion
confidence: 99%
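As a rough illustration of the MAC·s⁻¹ index the statement refers to, the sketch below multiplies the MACs needed per reservoir update by the node update rate; the node count, input/output sizes, and rate are placeholder assumptions, not parameters of the cited circuit.

```python
# Hedged sketch: estimating a MAC/s figure of merit for a reservoir computer.
# All sizes and the update rate below are placeholder assumptions.

def reservoir_macs_per_step(n_nodes: int, n_inputs: int, n_outputs: int) -> int:
    """MACs per reservoir update: recurrent, input, and readout weight products."""
    recurrent = n_nodes * n_nodes
    input_layer = n_nodes * n_inputs
    readout = n_outputs * n_nodes
    return recurrent + input_layer + readout

def macs_per_second(n_nodes: int, n_inputs: int, n_outputs: int,
                    update_rate_hz: float) -> float:
    """Throughput = MACs per update step times the update rate."""
    return reservoir_macs_per_step(n_nodes, n_inputs, n_outputs) * update_rate_hz

if __name__ == "__main__":
    # Assumed example: 100 nodes, 1 input, 10 outputs, 1 GHz update rate.
    print(f"{macs_per_second(100, 1, 10, 1e9):.2e} MAC/s")
```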
“…As the calculation is performed by measuring the optical transmission of reconfigurable and non-resonant (Figure 16(b)), i.e., broadband, passive components operating at a bandwidth exceeding 14 GHz (Figure 16(c)), the designed photonic core exhibits computational potential at the speed of light at very low power, thus providing an effective method to remove the computing bottleneck in machine learning hardware for applications ranging from live video processing to autonomous driving and AI-aided life-saving applications. In addition to the above findings, Prucnal et al [102]–[104] also proposed a photonic network, i.e., digital electronics and analogue photonics (DEAP), suited for convolutional neural networks based on silicon photonics technologies (Figure 16(d)). DEAP was estimated to perform convolutions between 2.8 and 14 times faster than a GPU while using roughly 25% less energy.…”
Section: On-chip Computational Memory
confidence: 79%
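A speedup figure like the quoted 2.8 to 14 times can be read as a ratio of effective MAC throughputs on a given convolution workload. The sketch below frames it that way; the layer dimensions and both throughput values are assumptions for illustration, not the DEAP or GPU numbers.

```python
# Illustrative sketch only: framing a photonic-vs-GPU speedup as a ratio of
# effective MAC throughputs. The throughputs below are assumed placeholders.

def conv_macs(h: int, w: int, c_in: int, c_out: int, k: int) -> int:
    """MACs for one k x k convolutional layer, stride 1, 'same' padding."""
    return h * w * c_in * c_out * k * k

def speedup(photonic_mac_s: float, gpu_mac_s: float) -> float:
    """Ratio of photonic to GPU MAC throughput for the same workload."""
    return photonic_mac_s / gpu_mac_s

if __name__ == "__main__":
    macs = conv_macs(224, 224, 3, 64, 3)   # first layer of a typical image CNN
    gpu = 1.0e13                           # assumed effective GPU MAC/s
    photonic = 5.0e13                      # assumed photonic core MAC/s
    print(f"Layer MACs: {macs:.2e}, speedup: {speedup(photonic, gpu):.1f}x")
```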
“…We harness the PMMC’s high-precision programmability and in-memory computing capability to demonstrate an optical convolutional neural network (OCNN) 28–30,48 . A typical CNN consists of an input layer and an output layer, which are connected by multiple hidden layers in between.…”
Section: Results
confidence: 99%
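For reference, a minimal digital model with the layer structure the excerpt describes (an input, hidden convolutional layers, and an output layer) might look like the PyTorch sketch below; it only illustrates the topology and is not the photonic OCNN implementation.

```python
# Conventional digital CNN sketch (PyTorch) illustrating the input ->
# hidden layers -> output structure described in the excerpt; it is not
# the photonic OCNN hardware itself.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Hidden layers between the input and output layers.
        self.hidden = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.output = nn.Linear(16 * 7 * 7, num_classes)  # output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.hidden(x)
        return self.output(torch.flatten(x, 1))

if __name__ == "__main__":
    model = TinyCNN()
    logits = model(torch.randn(1, 1, 28, 28))  # e.g. a 28x28 grayscale image
    print(logits.shape)                        # torch.Size([1, 10])
```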