In-band full-duplex systems allow for more efficient use of temporal and spectral resources by transmitting and receiving information at the same time and on the same frequency. However, this creates a strong self-interference signal at the receiver, making self-interference cancellation critical. Recently, neural networks have been used to perform digital self-interference cancellation with lower computational complexity than a traditional polynomial model. In this paper, we examine the use of advanced neural networks, such as recurrent and complex-valued neural networks, and we perform an in-depth network architecture exploration. This exploration reveals that complex-valued neural networks can significantly reduce both the number of floating-point operations and the number of parameters compared to a polynomial model, whereas real-valued networks only reduce the number of floating-point operations. For example, at a digital self-interference cancellation of 44.51 dB, a complex-valued neural network requires 33.7% fewer floating-point operations and 26.9% fewer parameters than the polynomial model.
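The polynomial baseline mentioned above is commonly a memory-polynomial (parallel-Hammerstein) model of the transmitted samples. The sketch below, a toy illustration with assumed order and memory-length parameters rather than the paper's actual configuration, shows how the odd-order basis functions generate the self-interference estimate that is subtracted at the receiver.

```python
import random

def mp_basis(x, n, p, m):
    """Odd-order memory-polynomial basis term x[n-m] * |x[n-m]|^(p-1)."""
    s = x[n - m]
    return s * abs(s) ** (p - 1)

def si_estimate(x, coeffs, P, M):
    """Self-interference estimate from a memory polynomial with
    nonlinearity order P (odd terms only) and memory length M.
    coeffs maps (p, m) -> complex coefficient h_{p,m}."""
    y = [0j] * len(x)
    for n in range(M - 1, len(x)):
        y[n] = sum(coeffs[(p, m)] * mp_basis(x, n, p, m)
                   for p in range(1, P + 1, 2) for m in range(M))
    return y

# Toy usage: with perfectly estimated coefficients, subtracting the model
# output cancels the (purely polynomial) self-interference exactly.
random.seed(0)
P, M = 3, 2
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(64)]
h = {(p, m): complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
     for p in range(1, P + 1, 2) for m in range(M)}
si = si_estimate(x, h, P, M)                       # "received" self-interference
residual = [a - b for a, b in zip(si, si_estimate(x, h, P, M))]
print(max(abs(r) for r in residual))               # zero residual in this toy
```

A neural canceller replaces the fixed basis-function expansion with learned layers; the paper's point is that doing this with complex-valued layers shrinks both the operation count and the parameter count relative to the coefficient table `h` above.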
Multicore processors need to communicate when working on shared tasks. In classical systems, this is done via shared objects protected by locks, which are implemented with atomic operations on main memory. However, access to shared main memory is already a bottleneck for multicore processors. Furthermore, the access time to a shared memory is often hard to predict and therefore problematic for real-time systems.

This paper presents a shared on-chip memory that is used for communication and supports atomic operations to implement locks. Access to the shared memory is arbitrated with time-division multiplexing, providing time-predictable access. The shared memory supports extended time slots so that a processor can execute more than one memory operation atomically, which allows the implementation of locking and other synchronization primitives.

We evaluate this shared scratchpad memory with synchronization support on a 9-core version of the T-CREST multicore platform. The worst-case access latency to the shared scratchpad is 13 clock cycles. Access to the atomic section under full contention, when every processor core contends to acquire a lock, takes 135 clock cycles.
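An extended time slot makes a multi-operation read-modify-write sequence atomic, which is all that a test-and-set lock needs. The sketch below is a minimal software model of that idea, not the T-CREST hardware: the body of `try_lock` stands in for the operations issued within one extended slot, and all names and sizes are illustrative.

```python
class SharedScratchpad:
    """Toy model of a TDM-arbitrated shared scratchpad. In hardware, each
    core owns a fixed time slot; an extended slot lets one core issue
    several memory operations back-to-back with no interleaving from
    other cores, making the sequence atomic."""

    def __init__(self, words):
        self.mem = [0] * words

    def try_lock(self, addr):
        # Test-and-set inside one extended slot: the read and the
        # conditional write complete before any other core is granted
        # access to the scratchpad.
        if self.mem[addr] == 0:
            self.mem[addr] = 1
            return True
        return False

    def unlock(self, addr):
        self.mem[addr] = 0  # a single write needs no extended slot

# Usage: nine "cores" contend for one lock word; exactly one acquires it.
spm = SharedScratchpad(words=16)
winners = [core for core in range(9) if spm.try_lock(addr=0)]
print(winners)  # only the first contender succeeds
```

The hardware analogue of `try_lock` is what the 135-cycle full-contention figure measures: every core must wait for its (extended) slot before its test-and-set can complete.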