2021
DOI: 10.1109/jphot.2021.3123624

A Fiber Nonlinearity Compensation Scheme With Complex-Valued Dimension-Reduced Neural Network

Abstract: A fiber nonlinearity compensation scheme based on a complex-valued dimension-reduced neural network is proposed. The proposed scheme performs all calculations in complex values and employs a dimension-reduced triplet feature vector to reduce the size of the input layer. Simulation and experiment results show that the proposed neural network needed only 20% of the computational complexity to reach the saturated performance gain of the real-valued triplet-input neural network, and had a similar saturated gain to the…

Cited by 10 publications (4 citation statements)
References 28 publications (22 reference statements)
“…The authors of Ref. [96] focused on coherent transmission. In this case, a complex-valued dimension-reduced triplet input neural network was proposed and experimentally tested with a 16-QAM 80 Gbps single polarization signal transmitted along 1800 km of SSMF (100 km SSMF loop).…”
Section: Weights
confidence: 99%
“…The real numbers are usually represented in FP32 format with 32 bits, or in FP64 with 64 bits. To implement the NN in memory or computationally-constrained environments, it is desirable to represent the weights, biases, activations and the input data with fewer bits [14]. To do this, a full-precision real number w ∈ R is mapped by a quantizer Q(.)…”
Section: B. Quantization of the Neural Network
confidence: 99%
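The quantizer Q(.) described in the statement above can be illustrated with a simple symmetric uniform quantizer: scale the full-precision FP32 weights by their maximum magnitude, round to an integer grid with the chosen number of bits, and dequantize. This is a minimal sketch of the general idea, not the specific quantizer used in the cited work; the function name and bit width are illustrative assumptions.

```python
import numpy as np

def uniform_quantize(w, num_bits=8):
    """Map full-precision weights w to a num_bits integer grid.

    Illustrative symmetric uniform quantizer Q(.): normalize by the
    largest magnitude, round to the nearest integer level, then
    dequantize back to floating point. Hypothetical sketch only.
    """
    levels = 2 ** (num_bits - 1) - 1              # e.g. 127 levels for 8 bits
    scale = np.max(np.abs(w)) / levels            # one scale for the tensor
    q = np.clip(np.round(w / scale), -levels, levels)
    w_hat = q * scale                             # dequantized approximation
    return w_hat, q.astype(np.int32), scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)   # stand-in FP32 weights
w_hat, q_int, scale = uniform_quantize(w, num_bits=8)
# Rounding bounds the per-weight error by scale / 2
```

Storing `q_int` (8-bit integers) plus a single `scale` in place of FP32 weights is what makes this attractive in memory- or computation-constrained environments.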
“…However, the algorithm required an operation with at least two samples per symbol, which introduced additional complexity. By contrast, another approach employed perturbation theory and ML algorithms to extract information from the received data in order to identify the nonlinear impairments experienced [45]- [47]. This family of fiber nonlinearity compensation algorithms, which uses the triplets from perturbation theory as input features, constructs a nonlinear function with tensor weights and treats nonlinear equalization as a regression problem — a form of feature engineering.…”
Section: Introduction
confidence: 99%
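The triplet features referred to above commonly take the form A[k+m]·A[k+n]·A*[k+m+n] around the symbol of interest k, and the regression view fits complex tensor weights so that a weighted sum of triplets approximates the nonlinear distortion. The sketch below illustrates that idea; the window size M, the index set, and the least-squares fit are illustrative assumptions, not the exact formulation of Refs. [45]-[47].

```python
import numpy as np

def triplet_features(sym, k, M=2):
    """Perturbation-theory triplet features around symbol index k.

    Each feature is sym[k+m] * sym[k+n] * conj(sym[k+m+n]) for
    |m|, |n| <= M -- the generic triplet form used in
    perturbation-based nonlinearity compensation. The window M
    and index set are illustrative assumptions.
    """
    return np.array([sym[k + m] * sym[k + n] * np.conj(sym[k + m + n])
                     for m in range(-M, M + 1)
                     for n in range(-M, M + 1)])

# Regression view: learn complex weights c so that F @ c approximates
# the nonlinear distortion d at each symbol (stand-in targets here).
rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
sym = rng.choice(levels, 64) + 1j * rng.choice(levels, 64)  # 16-QAM-like
F = np.stack([triplet_features(sym, k) for k in range(4, 60)])
d = rng.standard_normal(56) + 1j * rng.standard_normal(56)
c, *_ = np.linalg.lstsq(F, d, rcond=None)  # complex "tensor weights"
```

Because the features and weights stay complex-valued end to end, this framing is also the natural starting point for the complex-valued dimension-reduced network of the paper above, which shrinks the triplet feature vector before feeding the input layer.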