2022
DOI: 10.1109/jlt.2022.3171831

Neuromorphic Silicon Photonics and Hardware-Aware Deep Learning for High-Speed Inference

Cited by 41 publications (14 citation statements)
References 41 publications
“…20,21 Remarkable progress has been witnessed during the last five years in the field of neuromorphic photonics across all necessary constituent technology blocks, including MVM photonic architectures, 17,22–28 individual photonic computational elements, 29–32 nonlinear activations, 33–36 and photonic hardware-aware training models. 37,38 All these demonstrations have highlighted the potential for energy-efficient and high-speed DNNs by utilizing low-speed weight-encoding technologies and a rather small number of neurons, validating their credentials to support inference within small-scale neural network (NN) topologies that can fit in a practical silicon photonic (SiPho) chip.…”
Section: Introduction (mentioning)
confidence: 91%
“…Yet, as transistor scaling is stagnating, 10 a large number of alternative emerging technologies have been investigated toward boosting energy efficiency and performance scaling, e.g., optoelectronic memristors, 11–15 nanophotonics, 16,17 and spintronics, 18,19 with brain-inspired photonic accelerators forming one of the key candidate platforms for future AI computing engines due to their inherent credentials to support time-of-flight latencies and terahertz bandwidths. 20,21 Remarkable progress has been witnessed during the last five years in the field of neuromorphic photonics across all necessary constituent technology blocks, including MVM photonic architectures, 17,22–28 individual photonic computational elements, 29–32 nonlinear activations, 33–36 and photonic hardware-aware training models. 37,38 All these demonstrations have highlighted the potential for energy-efficient and high-speed DNNs by utilizing low-speed weight-encoding technologies and a rather small number of neurons, validating their credentials to support inference within small-scale neural network (NN) topologies that can fit in a practical silicon photonic (SiPho) chip.…”
Section: Introduction (mentioning)
confidence: 99%
“…It is worth noting that many of the limitations encountered in the PNN can eventually be alleviated by enforcing a hardware-aware DL training framework, where the training algorithm incorporates the physical-layer limitations of the underlying photonic hardware a priori in the training process [8,9]. Accounting for quantization (limited bit resolution) [10,11], limited extinction ratio (ER), or bandwidth has already been demonstrated as a viable route to a performance upgrade [8,9].…”
Section: Network Performance Analysis (mentioning)
confidence: 99%
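The hardware-aware training idea in the statement above can be illustrated with a minimal sketch: the layer's forward pass is evaluated with uniformly quantized weights (limited bit resolution) and an additive Gaussian noise term standing in for the analog photonic noise floor, so that the loss being minimized already reflects these hardware limitations. The bit width (`bits`), noise level (`noise_std`), and the simple fully connected layer are illustrative assumptions, not the training framework of the cited works.

```python
# Minimal sketch of a hardware-aware forward evaluation (assumptions: uniform
# symmetric weight quantization, additive Gaussian noise; `bits` and
# `noise_std` are illustrative, not values from the cited works).
import numpy as np

def quantize(w, bits=4):
    """Map weights onto the limited number of levels supported by the
    photonic weighting hardware (uniform, symmetric quantization)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 4 bits -> 7 positive levels
    w_max = np.max(np.abs(w)) + 1e-12     # avoid division by zero
    return np.round(w / w_max * levels) / levels * w_max

def hardware_aware_forward(x, w, bits=4, noise_std=0.05, rng=None):
    """Forward pass that exposes the loss to quantized weights and an
    additive noise floor, as in quantization/noise-aware training."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = x @ quantize(w, bits)             # matrix-vector multiply (MVM)
    return y + rng.normal(0.0, noise_std, size=y.shape)

# Toy usage: compare an ideal layer with its hardware-aware counterpart.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))              # batch of 8 input vectors
w = rng.normal(size=(16, 4))              # 16 inputs -> 4 neurons
print(np.mean(np.abs(x @ w - hardware_aware_forward(x, w, rng=rng))))
```

During training, the same quantize-and-perturb step would sit inside the forward pass (with a straight-through estimator for the quantization gradient), so the learned weights remain usable once deployed on the photonic hardware.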
“…The fact that most real-world information is analog in nature supports the trend of transitioning from the digital domain to the analog one, especially in light of recent advances in low-precision [6] and noise-resilient training algorithms [7–11]. The superiority of electronic crossbars has already been recognized by Syntiant [12], MemryX [13], and Mythic [14], with photonic platforms striving to produce an equivalent circuit for analog neuromorphic photonic setups, as shown by the works of Lightmatter [15,16] and Lightelligence [17,18] in commercial-grade large-scale coherent photonic integrated circuits (PICs).…”
Section: Introduction (mentioning)
confidence: 99%
“…Typical PNNs include fiber-optic networks [8–10], free-space optic networks [11,12], and integrated photonic circuits [13–15]. Each solution has its own characteristics, but they all share the same goal of exploiting the potential benefits of PNNs, namely large bandwidth and high energy efficiency [13,16–21], to compete with the electronic digital processors, represented by graphics processing units, of the third artificial intelligence boom. However, as related studies on PNNs have increased, a growing number of problems have been exposed.…”
Section: Introduction (mentioning)
confidence: 99%