2021
DOI: 10.1364/oe.416672

Bi-directional gated recurrent unit neural network based nonlinear equalizer for coherent optical communication system

Abstract: We propose a bi-directional gated recurrent unit neural network based nonlinear equalizer (bi-GRU NLE) for coherent optical communication systems. The performance of the bi-GRU NLE has been experimentally demonstrated in a 120 Gb/s 64-quadrature amplitude modulation (64-QAM) coherent optical communication system with a transmission distance of 375 km. Experimental results show that the proposed bi-GRU NLE can significantly mitigate nonlinear distortions. The Q-factors can exceed the hard-decision forward error cor…
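The abstract describes a bi-directional GRU acting as a sequence-to-sequence equalizer: received symbols are processed by a forward and a backward GRU pass, and the concatenated hidden states feed a linear readout per symbol. The paper does not publish its architecture details here, so the following is only a minimal NumPy sketch of that idea; the layer sizes, gate layout, and the `bigru_equalize` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, W, U, b):
    """One GRU step. W: (3H, D), U: (3H, H), b: (3H,). Gate order: z, r, n."""
    H = h.shape[0]
    gx = W @ x + b
    gh = U @ h
    z = 1 / (1 + np.exp(-(gx[:H] + gh[:H])))        # update gate
    r = 1 / (1 + np.exp(-(gx[H:2*H] + gh[H:2*H])))  # reset gate
    n = np.tanh(gx[2*H:] + r * gh[2*H:])            # candidate state
    return (1 - z) * n + z * h

def bigru_equalize(symbols, params):
    """Map a block of received symbols (T, D) to equalized outputs (T, D_out)."""
    Wf, Uf, bf, Wb, Ub, bb, Wo, bo = params
    H = Uf.shape[1]
    T = symbols.shape[0]
    hf, hb = np.zeros(H), np.zeros(H)
    fwd, bwd = np.zeros((T, H)), np.zeros((T, H))
    for t in range(T):                      # forward pass over the block
        hf = gru_step(symbols[t], hf, Wf, Uf, bf)
        fwd[t] = hf
    for t in reversed(range(T)):            # backward pass over the same block
        hb = gru_step(symbols[t], hb, Wb, Ub, bb)
        bwd[t] = hb
    feats = np.concatenate([fwd, bwd], axis=1)  # (T, 2H) bi-directional features
    return feats @ Wo + bo                      # linear readout per symbol

# Toy dimensions: I/Q input, hidden size, I/Q output, block length.
D, H, D_out, T = 2, 8, 2, 16
params = (rng.normal(size=(3*H, D)), rng.normal(size=(3*H, H)), np.zeros(3*H),
          rng.normal(size=(3*H, D)), rng.normal(size=(3*H, H)), np.zeros(3*H),
          rng.normal(size=(2*H, D_out)), np.zeros(D_out))
y = bigru_equalize(rng.normal(size=(T, D)), params)
print(y.shape)  # prints (16, 2)
```

In practice the weights would be trained against known transmitted symbols; the point of the backward pass is that each equalized output can draw on both preceding and following symbols, which is what distinguishes the bi-GRU from a causal recurrent equalizer.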


Cited by 91 publications (33 citation statements)
References 30 publications
“…Thus, the NN equalizer, which is reverting the channel nonlinear effects, continues to function well for other modulation formats. This is in stark contrast to the case of classification equalizers (classifiers) [46][47][48], because the latter incorporates the decision boundaries in the NN structure itself. For the classifiers, the S-NN will not work with the new target task since its output stage does not capture the different symbol alphabet.…”
Section: Transfer Learning For Different Modulation Formats
confidence: 99%
“…Thus, the NN equalizer, which is basically reverting the channel nonlinear effects, continues to function as well for other modulation formats. This is in stark contrast to the case of classification equalizers (classifiers) [43][44][45] because classification equalizers incorporate the hard-decision boundaries into the NN structure itself. For the classifiers, the S-NN will not work with the new target task since its output stage will not capture the different symbol alphabet.…”
Section: Transfer Learning To Different Modulation Formats Scenarios
confidence: 97%
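The quoted statements contrast a regression-style NN equalizer, whose output stage is independent of the symbol alphabet, with a classification equalizer, whose output stage encodes the alphabet itself. A minimal sketch of that structural difference (the `head_shape` helper and the sizes are hypothetical, purely for illustration):

```python
# Contrast the output stage of a regression equalizer (predicts I/Q values)
# with a classification equalizer (predicts one of M constellation points).
def head_shape(hidden, mode, M):
    if mode == "regression":
        return (hidden, 2)  # I and Q components: independent of modulation order
    return (hidden, M)      # softmax over the M-symbol alphabet

# The regression head is identical for 16-QAM and 64-QAM, so its weights
# transfer across formats; the classifier head changes shape and cannot.
assert head_shape(64, "regression", 16) == head_shape(64, "regression", 64)
assert head_shape(64, "classification", 16) != head_shape(64, "classification", 64)
```

This is why, as the citing works note, transfer learning to a new modulation format works for the regression equalizer but fails for classifiers: the classifier's output layer bakes the decision boundaries of one alphabet into the network structure.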
“…Furthermore, the re-training process needs much less data, with a reduction of up to 99% of the required training data for the 3 considered symbol rates. The number of required epochs for the TWC fiber link (Case II) decreased by 92%, 73%, and 75%, respectively, when switching to (d) 45 GBd, (e) 65 GBd, and (f) 85 GBd. We can also see that the retraining phase needs less information: 95%, 95%, and 90%, respectively.…”
Section: Transfer Learning To Different Symbol Rate Scenarios
confidence: 99%
“…In the whole matching layer, a bi-directional gated recurrent unit network [27] is used to realize parameter transfer, so forward and backward transfer are distinguished during comparison. Assuming that the return result is x_j^β after comparing the sentence unit σ_j^β in question Q with the last sentence unit in answer A, then:…”
Section: Multi-strategy Interaction Layer
confidence: 99%