2020
DOI: 10.1109/jlt.2020.2994220

Revisiting Efficient Multi-Step Nonlinearity Compensation With Machine Learning: An Experimental Demonstration

Abstract: Efficient nonlinearity compensation in fiber-optic communication systems is considered a key element to go beyond the "capacity crunch". One guiding principle for previous work on the design of practical nonlinearity compensation schemes is that fewer steps lead to better systems. In this paper, we challenge this assumption and show how to carefully design multi-step approaches that provide better performance-complexity tradeoffs than their few-step counterparts. We consider the recently proposed learned digit…

Cited by 40 publications (23 citation statements) · References 57 publications
“…We assume here that the signal in each channel propagates at its carrier frequency and that, after each linear step, the group delay corresponding to the channel frequency and the distance propagated is compensated. To account for the group-delay difference, a real-valued fractional-delay (FD) filter for each spectral channel can be used after the convolutional layer in each linear step [25], [26]. Furthermore, to reduce the computational complexity, we set the step length so that the time delay for each channel is divisible by the symbol interval T, as suggested in [20].…”
Section: B. Convolution Layers for Chromatic Dispersion Compensation
confidence: 99%
“…Although [4] has demonstrated that joint optimization of the filters can make time-domain filtering effective for LDBP, with a desirable complexity reduction for hardware implementations, we have chosen to restrict the analysis in this work to a frequency-domain implementation of LDBP, so that its performance can be directly compared with the equivalent untrained conventional DBP, without discussing the complexity of the different architectures. This choice of implementation also allows us to use unsupervised learning, which greatly simplifies the training procedure and improves the robustness of possible real-time implementations.…”
Section: Neural Network and LDBP
confidence: 99%
“…Noticing the structural similarity between DBP and artificial neural networks (ANNs), recent works [3,4] have proposed combining data-driven optimization techniques developed for ANNs with the structure of DBP, creating an architecture that…”
Section: Introduction
confidence: 99%
“…A variant of the NN-based NLC algorithm, called learned DBP (LDBP), treats the linear steps and the nonlinear steps of the SSFM as the linear layers and the nonlinear activation functions of an NN [15], [16]. LDBP showed a significant performance gain over DBP in experimental settings [17]–[19]. The taps in the linear layers can be made extremely short by gradually pruning them during training [17].…”
Section: Introduction
confidence: 99%
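The LDBP structure described in this excerpt — alternating linear layers (short, prunable FIR filters) and nonlinear "activations" (Kerr phase rotations) — can be sketched in a few lines. The function below is an illustrative skeleton under assumed names and parameters, not the trained model from the cited works; in LDBP the tap vectors and effective nonlinearity coefficients would be the learned parameters.

```python
import numpy as np

def ldbp(signal, taps_per_step, gamma_eff_per_step):
    """Apply alternating linear/nonlinear steps to a complex baseband signal.

    taps_per_step:      list of real FIR tap arrays, one per step (in LDBP
                        these are the trainable, prunable linear layers)
    gamma_eff_per_step: list of effective nonlinearity coefficients, one
                        per step (scaling the Kerr phase-rotation
                        "activation" of that step)
    """
    x = signal
    for taps, gamma in zip(taps_per_step, gamma_eff_per_step):
        x = np.convolve(x, taps, mode="same")        # linear layer (short FIR)
        x = x * np.exp(-1j * gamma * np.abs(x)**2)   # nonlinear activation
    return x
```

With identity taps `[1.0]` and zero gamma every step is a no-op, which makes the skeleton easy to unit-test before plugging in trained coefficients; the short time-domain convolutions here are also what makes the pruned-tap, low-power implementation mentioned in the excerpt feasible.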
“…LDBP showed a significant performance gain over DBP in experimental settings [17]–[19]. The taps in the linear layers can be made extremely short by gradually pruning them during training [17]. In this case, the convolutions in the linear layers can be performed in the time domain for low power consumption [20].…”
Section: Introduction
confidence: 99%