2018 European Conference on Optical Communication (ECOC)
DOI: 10.1109/ecoc.2018.8535153

Computational-Complexity Comparison of Artificial Neural Network and Volterra Series Transfer Function for Optical Nonlinearity Compensation with Time- and Frequency-Domain Dispersion Equalization

Cited by 17 publications (18 citation statements)
References 7 publications

“…Compared with DD, NNs can support much longer fiber transmission. The maximum fiber length can be extended to 19, 22, 23, and 27 km with the help of RBF-NN, F-NN, L-RNN, and AR-RNN, respectively. We also notice that as the fiber length increases, there exists an intersection between RBF-NN and F-NN at a fiber length of around 17 km.…”
Section: Results
confidence: 99%

“…The computational complexities of a simple F-NN and a Volterra-series-based nonlinear equalizer for coherent transmission systems have been compared in [27]. It is shown that an F-NN-based nonlinear equalizer involves lower computational complexity than a Volterra-series-based equalizer for equivalent BER performance.…”
Section: Introduction
confidence: 99%

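For readers weighing the two approaches, the sketch below shows how per-symbol multiplier counts of this kind are commonly tallied. It is a minimal illustration under stated assumptions: the exact complexity expressions of [27] are not given in the excerpt, the symmetry-reduced third-order kernel count is a textbook convention, and the layer sizes and memory length in the demo are hypothetical, not values from the paper.

```python
# Illustrative per-symbol multiplier counts: one-hidden-layer feed-forward NN
# versus a symmetry-reduced third-order Volterra equalizer. These are common
# textbook tallies, NOT the exact expressions from [27].

def fnn_mults(n_i: int, n_hid: int, n_o: int) -> int:
    """Real multiplications for one forward pass of a fully connected F-NN."""
    return n_i * n_hid + n_hid * n_o

def volterra3_mults(K: int) -> int:
    """Multiplications for a 3rd-order Volterra filter with memory length K:
    K linear taps plus the distinct symmetric 3rd-order product terms."""
    third_order_terms = K * (K + 1) * (K + 2) // 6
    return K + 3 * third_order_terms  # ~3 mults per 3rd-order product term

if __name__ == "__main__":
    # Hypothetical sizes chosen only to show the scaling gap.
    print("F-NN mults:    ", fnn_mults(n_i=21, n_hid=10, n_o=2))  # 230
    print("Volterra mults:", volterra3_mults(K=21))               # 5334
```

The gap widens quickly with memory length, since the third-order kernel grows cubically in K while the F-NN cost stays bilinear in its layer sizes; this is the scaling behind the BER-matched comparison quoted above.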
“…The number of multipliers was used to represent the complexity of the algorithm. The method for calculating the complexity of the MLSE was the same as that in [30], and the method for the NN in [31] was adopted here. For an ANN, N_ep denotes the number of completed epochs required for training; n_i, n_hid1, n_hid2 and n_o are the numbers of neurons in the input, first hidden, second hidden, and output layers, respectively; for the MLSE, M and L represent the modulation order and the memory length, respectively.…”
Section: Results
confidence: 99%

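As a rough rendering of these counts: the excerpt does not reproduce the exact formulas of [30] and [31], so the sketch below assumes one training pass costs a forward sweep over the weight matrices plus a comparably priced backward sweep, and takes the MLSE cost as proportional to its M^L trellis states; both are illustrative approximations.

```python
# Hedged sketch of the complexity parameters named above. The factor of 2 for
# backpropagation and the M**L state count are common approximations, not the
# exact expressions of [30]/[31].

def ann_training_mults(N_ep: int, n_i: int, n_hid1: int, n_hid2: int, n_o: int) -> int:
    """Rough multiplications to train a 2-hidden-layer ANN for N_ep epochs
    (one forward pass over the weight matrices, plus ~the same for backprop)."""
    per_pass = n_i * n_hid1 + n_hid1 * n_hid2 + n_hid2 * n_o
    return 2 * N_ep * per_pass

def mlse_trellis_states(M: int, L: int) -> int:
    """Viterbi trellis states for M-ary modulation with memory length L;
    per-symbol MLSE work grows in proportion to this count."""
    return M ** L

if __name__ == "__main__":
    # Hypothetical parameter values, for illustration only.
    print(ann_training_mults(N_ep=100, n_i=8, n_hid1=16, n_hid2=8, n_o=2))  # 54400
    print(mlse_trellis_states(M=4, L=3))                                    # 64
```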
“…We also compared the complexity of the proposed KNN algorithm with that of an artificial neural network [13]. The complexity calculations of the ANN [27,28] and the KNN are listed in Table 2; each calculation involves two parts, namely the training part and the prediction part. For the ANN, N_ep is the number of samples in a training set and n_i, n_hid and n_o are the numbers of neurons in the input, hidden and output layers, respectively.…”
Section: Experimental Verification and Discussion
confidence: 99%

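To make the training/prediction split concrete, here is a minimal sketch; Table 2 of the citing paper is not reproduced in the excerpt, so the counts below only express the usual contrast that a trained ANN costs a fixed number of multiplications per decision while KNN pays a distance evaluation against every stored training sample. All parameter values are hypothetical.

```python
# Minimal sketch of the training-vs-prediction cost split for ANN and KNN.
# Generic counts, not the entries of Table 2 in the citing paper.

def ann_prediction_mults(n_i: int, n_hid: int, n_o: int) -> int:
    """Per-decision multiplications for a trained one-hidden-layer ANN."""
    return n_i * n_hid + n_hid * n_o

def knn_prediction_mults(n_train: int, d: int) -> int:
    """Per-decision multiplications for KNN: one squared-Euclidean distance
    (d multiplications) against each of the n_train stored samples."""
    return n_train * d

if __name__ == "__main__":
    # KNN has no training cost, but its prediction cost scales with the
    # training-set size; the ANN front-loads its cost into training.
    print("ANN per decision:", ann_prediction_mults(n_i=4, n_hid=12, n_o=2))  # 72
    print("KNN per decision:", knn_prediction_mults(n_train=1000, d=4))       # 4000
```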