2018 European Conference on Optical Communication (ECOC)
DOI: 10.1109/ecoc.2018.8535430

ASIC Implementation of Time-Domain Digital Backpropagation with Deep-Learned Chromatic Dispersion Filters

Abstract: We consider time-domain digital backpropagation with chromatic dispersion filters jointly optimized and quantized using machine-learning techniques. Compared to the baseline implementations, we show improved BER performance and >40% power dissipation reductions in 28-nm CMOS.
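For context, below is a minimal sketch (in plain NumPy, not the paper's fixed-point ASIC datapath) of the split-step structure behind time-domain digital backpropagation that the learned, quantized filters plug into; `h`, `filters`, and `gamma_eff` are placeholder names, and in the paper the taps would come from the joint optimization and quantization described above.

```python
import numpy as np

def dbp_step(x, h, gamma_eff):
    """One time-domain DBP step: a short FIR chromatic-dispersion filter
    followed by a memoryless nonlinear phase rotation."""
    y = np.convolve(x, h, mode="same")                     # linear (CD) step
    return y * np.exp(-1j * gamma_eff * np.abs(y) ** 2)    # nonlinear step

def dbp(x, filters, gamma_eff):
    """Cascade of DBP steps, one short learned filter per step."""
    for h in filters:
        x = dbp_step(x, h, gamma_eff)
    return x
```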

Cited by 17 publications (23 citation statements)
References 19 publications (38 reference statements)
Citation types: 1 supporting, 22 mentioning, 0 contrasting

“…The obtained FIR filters are as short as 5 or 3 (symmetric) taps per step, leading to very simple and efficient hardware implementation. This is confirmed by recent ASIC synthesis results which show that the power consumption of LDBP becomes comparable to linear equalization [24]. LDBP can also be extended to subband processing to enable low-complexity DBP for multi-channel or other wideband transmission scenarios [25].…”
Section: Optimization Results (supporting)
Confidence: 64%
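The symmetry mentioned above roughly halves the number of distinct coefficients (and multipliers) per step. A small sketch, with made-up tap values, of how a 3- or 5-tap symmetric filter can be parameterized by its unique half:

```python
import numpy as np

def symmetric_fir(half_taps):
    """Build a symmetric FIR response from its unique half, so a 3-tap
    (or 5-tap) per-step filter only needs 2 (or 3) distinct coefficients."""
    center = half_taps[0]
    wings = np.asarray(half_taps[1:])
    return np.concatenate([wings[::-1], [center], wings])

# Hypothetical 3-tap symmetric per-step CD filter (made-up values):
h3 = symmetric_fir(np.array([0.96 + 0.00j, 0.02 + 0.14j]))
# Hypothetical 5-tap symmetric filter:
h5 = symmetric_fir(np.array([0.95 + 0.00j, 0.02 + 0.13j, -0.01 + 0.03j]))
```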
“…The filters are then progressively pruned to a given target length by forcing the outermost taps to zero at certain iterations during SGD [26]. The zero forcing is done using a masking operation in TensorFlow.…”
Section: B. Pre-training and Filter Pruning (mentioning)
Confidence: 99%
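A minimal TensorFlow sketch of this masking idea (the variable names, filter length, and pruning schedule are our assumptions, not taken from the cited paper): the taps are multiplied by a non-trainable 0/1 mask inside the model, and at chosen SGD iterations the outermost surviving taps are forced to zero, progressively shortening the filter.

```python
import tensorflow as tf

num_taps = 9  # hypothetical pre-trained filter length for one step
initial_taps = tf.complex(tf.random.normal([num_taps], stddev=0.1),
                          tf.random.normal([num_taps], stddev=0.1))

taps = tf.Variable(initial_taps)                              # trainable taps
mask = tf.Variable(tf.ones([num_taps], dtype=tf.complex64),
                   trainable=False)                           # 0/1 mask

def effective_taps():
    # The mask is applied multiplicatively, so zeroed taps stay zero and
    # their gradient contributions are scaled to zero as well.
    return taps * mask

def prune_outermost():
    """Force the current outermost surviving taps to zero; called at selected
    SGD iterations to progressively shorten the filter."""
    m = mask.numpy()
    alive = m.real.nonzero()[0]
    if alive.size > 2:            # keep at least the center tap
        m[alive[0]] = 0
        m[alive[-1]] = 0
        mask.assign(m)
```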
“…In terms of complexity, it has been shown that the power consumption and chip area for time-domain DBP [48] and LDBP [26] are dominated by the linear steps, whereas the nonlinear steps have efficient hardware implementations using a Taylor expansion. Therefore, we focus on the linear steps for simplicity.…”
Section: Testing (mentioning)
Confidence: 99%
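To illustrate why the nonlinear steps are comparatively cheap, here is a sketch (our own simplification, with a generic phase coefficient `phi`, not the cited implementation) of replacing the exact phase rotation by its first-order Taylor expansion:

```python
import numpy as np

def nonlinear_step_exact(y, phi):
    # Exact Kerr-compensation step: a full complex phase rotation exp(j*phi*|y|^2).
    return y * np.exp(1j * phi * np.abs(y) ** 2)

def nonlinear_step_taylor(y, phi):
    # First-order Taylor expansion exp(j*x) ~ 1 + j*x: the rotation collapses
    # to a handful of real multiplications and additions, which is what makes
    # the nonlinear step cheap relative to the FIR (linear) steps in hardware.
    return y * (1.0 + 1j * phi * np.abs(y) ** 2)
```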