2022
DOI: 10.48550/arxiv.2207.10444
Preprint
Machine Learning assisted excess noise suppression for continuous-variable quantum key distribution

Abstract: Excess noise is a major obstacle to high-performance continuous-variable quantum key distribution (CVQKD); it mainly arises from the amplitude attenuation and phase fluctuation of quantum signals caused by channel instability. Here, an excess noise suppression scheme based on equalization is proposed. In this scheme, distorted signals are corrected through equalization assisted by a neural network and a pilot tone, relieving the pressure on post-processing and eliminating extra hardware cost. For…
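The abstract describes the pilot-tone-assisted equalization only in outline. As a rough, hypothetical sketch of the underlying idea, the numpy snippet below estimates a channel's amplitude attenuation and slowly varying phase drift from a known pilot tone and inverts that estimate on the received signal. The channel model, noise levels, and smoothing window are all illustrative assumptions, not the paper's parameters; the actual scheme additionally trains a neural network to assist the correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy channel model (assumed): amplitude attenuation + slow phase drift ---
n = 1000
signal = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # toy quadrature data
pilot = np.ones(n, dtype=complex)                                     # known pilot tone

attenuation = 0.8                                  # assumed channel transmittance
phase_drift = np.cumsum(rng.normal(0, 0.01, n))    # slowly varying phase fluctuation
channel = attenuation * np.exp(1j * phase_drift)

rx_signal = channel * signal + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
rx_pilot = channel * pilot + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# --- Pilot-assisted equalization: estimate the channel from the pilot tone ---
# Smooth the per-symbol estimate to suppress the pilot's own measurement noise.
win = 50
kernel = np.ones(win) / win
chan_est = np.convolve(rx_pilot / pilot, kernel, mode="same")

# Invert the estimated channel (zero-forcing equalizer) on the quantum signal.
eq_signal = rx_signal / chan_est

# Residual error variance before and after equalization.
print("before:", np.var(rx_signal - signal))
print("after: ", np.var(eq_signal - signal))
```

Running this shows the error variance dropping after equalization, which is the sense in which such a scheme "suppresses" channel-induced excess noise in this toy setting.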

Cited by 2 publications (23 citation statements) | References: 38 publications
“…Generic CV-QKD protocol, where refs. [5–24] were applied during measurement, ref. [28] was applied to key sifting, ref.…”
Section: Figure (mentioning)
confidence: 99%
“…A linear activation function is commonly used in the output layer of NNs designed for regression, such that the output can take any continuous value, while sigmoid activation functions are commonly used for classification NNs to return discrete outputs. In the surveyed works, MLPs were applied to excess noise filtering in [12], parameter optimization in [26], reconciliation in [29], and key rate estimation in [30]. Figure 5a gives an example of a generic fully connected MLP, where the connections between the input layer, hidden layer, and output layer are shown, as well as an example perceptron in Figure 5b, indicating how the inputs and weights from the previous layer are transferred into an output via the activation function (following the path outlined in red).…”
Section: Machine Learning (mentioning)
confidence: 99%
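The quoted passage contrasts output activations: identity (linear) for regression, where outputs must range over any continuous value, versus sigmoid for classification, where outputs are squashed into (0, 1). A minimal numpy forward pass below makes that distinction concrete; the layer sizes, weights, and data are arbitrary illustrative assumptions, not taken from any of the surveyed works.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, w1, b1, w2, b2, output_activation):
    """One-hidden-layer MLP: tanh hidden layer, task-dependent output activation."""
    h = np.tanh(x @ w1 + b1)   # hidden layer
    z = h @ w2 + b2            # output-layer pre-activation
    return output_activation(z)

# Arbitrary weights for a 4 -> 8 -> 1 network (illustrative only).
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
x = rng.normal(size=(3, 4))    # a batch of 3 inputs

# Regression head: linear (identity) activation, outputs are unbounded.
y_reg = mlp_forward(x, w1, b1, w2, b2, lambda z: z)

# Classification head: sigmoid maps the same pre-activations into (0, 1).
y_cls = mlp_forward(x, w1, b1, w2, b2, lambda z: 1 / (1 + np.exp(-z)))

print("regression outputs: ", y_reg.ravel())
print("class probabilities:", y_cls.ravel())
```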