Performance of optical fibre transmission can be improved by digitally reversing the channel. When this is achieved by simulating short segments that separate chromatic dispersion from the Kerr nonlinearity, it is known as digital back-propagation (DBP). Time-domain DBP has the potential to reduce complexity with respect to frequency-domain algorithms. However, when finer steps are used in the algorithm, the accuracy of the individual smaller steps suffers. This problem can be mitigated by adapting the chromatic dispersion filters of the individual steps to simulated or measured data. Machine learning frameworks have enabled gradient-descent-style adaptation of large algorithms, allowing many dispersion filters to be adapted so that they accurately represent the transmission in reverse. The proposed technique has been used in an experimental demonstration of learned time-domain DBP with a four-channel 64-GBd dual-polarization 64-QAM signal transmitted over a 10-span recirculating loop totalling 1014 km. The signal processing scheme consists of alternating finite impulse response filters and nonlinear phase shifts, where the filter coefficients were adapted using the experimental measurements. Performance gains over linear compensation, in terms of signal-to-noise ratio improvement, were comparable to those achieved with conventional frequency-domain DBP. Our experimental investigation shows the potential of digital signal processing with learned parameters for improving the performance of high-data-rate long-haul optical fibre transmission systems.
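The alternating structure described above (FIR dispersion filters interleaved with nonlinear phase shifts) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the per-step parameters `fir_taps` and `gamma_eff` are hypothetical stand-ins for the learned coefficients, which in the paper are adapted by gradient descent on simulated or measured data.

```python
import numpy as np

def dbp_step(field, fir_taps, gamma_eff):
    """One time-domain DBP step: a linear FIR filter approximating the
    inverse of chromatic dispersion, followed by a nonlinear phase
    rotation approximating the inverse of the Kerr effect.
    `fir_taps` (complex) and `gamma_eff` (float) are the per-step
    learnable parameters in a learned-DBP setting."""
    # Linear sub-step: convolve the complex field with the dispersion filter.
    filtered = np.convolve(field, fir_taps, mode="same")
    # Nonlinear sub-step: phase shift proportional to instantaneous power.
    return filtered * np.exp(-1j * gamma_eff * np.abs(filtered) ** 2)

def dbp(field, taps_per_step, gammas):
    """Alternate FIR filtering and nonlinear phase shifts, one pair per step."""
    for fir_taps, gamma_eff in zip(taps_per_step, gammas):
        field = dbp_step(field, fir_taps, gamma_eff)
    return field
```

Because the nonlinear sub-step is a pure phase rotation, it preserves the signal power; only the FIR sub-step changes the pulse shape, which is why adapting the filter taps per step is where the learning happens.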
We propose an automated design method for heterogeneous trench-assisted multi-core fibres (MCFs). The method uses neural networks to speed up coating loss estimation by ∼10⁶ times, and a particle swarm optimization (PSO) algorithm to explore optimal MCF designs under various objectives and property constraints. The latter reduces the number of design evaluations by ten orders of magnitude compared with a brute-force search. The artificial intelligence (AI)-based method is used to design MCFs with two objectives: minimizing crosstalk (XT) and maximizing effective mode area (A_eff). By optimizing XT under different combinations of A_eff and cutoff-wavelength constraints for 6-core fibres, we achieved an ultra-low XT of −92.1 dB/km for a C+L-band fibre and −64 dB/km for an E+S+C+L-band fibre. Meanwhile, we explored the upper limit of A_eff under different bandwidth constraints, resulting in a relative core multiplicity factor of 6.82. We performed capacity analysis of the fibres for two transmission lengths. It is shown that bandwidth is the dominant factor, while the increase brought by A_eff and the penalty caused by XT are relatively small. Our fibres exceed the cutoff-limited capacity of the 7-core fibre in the literature by 35.1% and 84.8% for 1200 km and 6000 km transmission, respectively.
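A minimal sketch of the PSO search loop described above, assuming the expensive physical evaluation has been replaced by a fast surrogate. The objective function, bounds, and swarm hyperparameters here are illustrative placeholders, not the paper's actual settings: in the real design flow, `objective` would be the neural-network surrogate predicting a fibre property (e.g. XT or coating loss) from geometry parameters such as core pitch and trench width.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iters=100, seed=0):
    """Minimal particle swarm optimization sketch.
    `objective`: fast surrogate mapping a design vector to a scalar cost.
    `bounds`: sequence of (lo, hi) pairs, one per design parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # initial designs
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                    # per-particle best
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()              # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration weights (typical values)
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)  # keep designs within the feasible box
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

The key efficiency point from the abstract is that every call to `objective` is a surrogate inference (∼10⁶× faster than a full numerical solve), so the swarm can afford thousands of evaluations per design run.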
6-core and 8-core trench-assisted heterogeneous fibres with a standard cladding diameter are designed using artificial intelligence-based techniques, including a cutoff-wavelength regressor. The proposed designs, for the first time, suppress the crosstalk of an 8-core fibre at 1550 nm to as low as −55 dB/km over the whole S+C+L band while keeping coating loss below 0.001 dB/km. We compare the two designs to reveal the influence of the additional cores in the 125 µm cladding-diameter scenario. We report on the transmission characteristics and performance of the MCFs in terms of capacity and spatial spectral efficiency, including the influence of bandwidth, effective mode area, distance and crosstalk, for a range of transmission distances. The artificial intelligence-based method and the insights given can be used to significantly speed up and tailor designs for a variety of telecom and datacom applications.
We demonstrate a genetic algorithm (GA) based system that optimizes optical interconnects using a silicon photonic multi-core fibre-coupled transceiver. The GA selects 48 parameters to deliver a BER as low as 6.9×10⁻¹⁶ on channels with diverse losses.

OCIS codes: (060.0060) Fiber optics and optical communication; (060.2360) Fiber optics links and subsystems.

Introduction

It has been envisioned that the speed of the I/Os on semiconductor chips in computing systems will increase to reach capacities beyond 1 Pb/s by 2030 [1]. A front-runner application for fulfilling this purpose is a board-detachable optical transceiver in the form of a mid-board optic (MBO), which offers high bandwidth density and energy efficiency. In a recent work [2], a multi-processor system-on-chip (MPSoC) based memory-disaggregated data centre network (DCN) was reported, using an optical circuit-switched network built with MBOs. In these disaggregated systems, high-bandwidth-density, high-capacity optical transceivers are required to support dynamic all-optically routed communication between processors and remote high bandwidth memory (HBM) modules that can support over 1 Tb/s of bandwidth. Moreover, to reduce the front-panel port count and increase bandwidth density in DCNs, multi-core fibre (MCF) coupled transceivers have recently been explored [3]. MCF-based DCNs have also been shown to outperform wavelength division multiplexing (WDM) systems in terms of blocking performance, cost and energy efficiency [4]. Thus, it can be anticipated that the integration of MCFs with MBOs in future DCNs can lead to substantial performance gains. However, opto-electronic transceivers embedded on disaggregated CPUs and memory modules offer a multitude of control parameters. In practice, transceiver channels might offer different signal quality and experience various levels of degradation and attenuation throughout the network.
Thus, the optimum selection of these parameters can maximize the system power budget, potentially enabling forward error correction (FEC)-free operation, which is essential in low-cost, low-complexity and low-latency DCNs.

In this paper, we develop a purpose-made genetic algorithm (GA) and use it in real time to optimize 48 equalization and amplification parameters of an 8-channel MCF-MBO driven by a Xilinx MPSoC, with each channel operating at 10 Gb/s over an optical channel with diverse losses. The process took 13 hours instead of the 1.44×10³³ hours required by a brute-force search. Results suggest a significant performance enhancement in terms of bit error rate (BER).
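The GA loop described above can be sketched as follows. This is an illustrative skeleton under stated assumptions, not the purpose-made algorithm of the paper: the genome length of 48 matches the abstract, but the discrete `levels` per parameter, the population size, the selection and crossover scheme, and the `fitness` interface are hypothetical. In the real system, `fitness` would be a real-time BER measurement on the hardware (lower is better).

```python
import random

def genetic_search(fitness, genome_len=48, levels=16, pop_size=20,
                   generations=50, mut_rate=0.05, seed=0):
    """Minimal genetic algorithm sketch. Each genome is a list of discrete
    register settings (`levels` values per parameter, a hypothetical
    quantization); `fitness` maps a genome to a cost, e.g. measured BER."""
    rng = random.Random(seed)
    pop = [[rng.randrange(levels) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [rng.randrange(levels) if rng.random() < mut_rate else g
                     for g in child]                # per-gene mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

Each generation costs `pop_size` fitness evaluations, which is why a GA can search a 48-dimensional discrete space in hours when an exhaustive sweep would take ∼10³³ hours.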
PULSE's nanosecond-speed scheduler, which solves an NP-hard network scheduling problem, delivers skew-tolerant performance at 90% input loads. It achieves >90% throughput, with 1.5-1.9 ms mean and 16-24 ms tail (99th-percentile) latency, for up to 6:1 hot:cold skewed traffic in an optical circuit-switched (OCS) DCN.