In this paper, we propose a transformer-based architecture, called the two-stage transformer neural network (TSTNN), for end-to-end speech denoising in the time domain. The proposed model is composed of an encoder, a two-stage transformer module (TSTM), a masking module, and a decoder. The encoder maps the noisy input speech into a feature representation. The TSTM exploits four stacked two-stage transformer blocks to efficiently extract local and global information from the encoder output, stage by stage. The masking module estimates a mask that is multiplied with the encoder output. Finally, the decoder uses the masked encoder features to reconstruct the enhanced speech. Experimental results on the benchmark dataset show that TSTNN outperforms most state-of-the-art models operating in either the time or frequency domain, while having significantly lower model complexity.
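The abstract does not give implementation details, but the described pipeline (encoder, chunked local/global attention, mask multiplication, decoder) can be sketched at a high level. The following is a minimal numpy sketch under stated assumptions: single-head attention, one two-stage block instead of four, random stand-in weights, and illustrative frame/chunk dimensions; all helper names (`frame`, `overlap_add`, etc.) are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq, C); single-head scaled dot-product attention
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def frame(x, L, hop):
    # split waveform into overlapping frames of length L
    n = 1 + (len(x) - L) // hop
    return np.stack([x[i * hop : i * hop + L] for i in range(n)])

def overlap_add(frames, hop):
    # reconstruct the waveform from (possibly modified) frames
    n, L = frames.shape
    y = np.zeros((n - 1) * hop + L)
    for i, f in enumerate(frames):
        y[i * hop : i * hop + L] += f
    return y

# illustrative dimensions: frame length, hop, channels, chunk size
L, hop, C, S = 16, 8, 32, 4

# random stand-ins for learned parameters
W_enc = rng.standard_normal((L, C)) * 0.1
W_loc = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]  # Wq, Wk, Wv
W_glb = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]
W_mask = rng.standard_normal((C, C)) * 0.1
W_dec = rng.standard_normal((C, L)) * 0.1

def tstnn_forward(x):
    # assumes the frame count is divisible by the chunk size S
    E = frame(x, L, hop) @ W_enc                 # encoder: (n_frames, C)
    G = E.reshape(-1, S, C)                      # chunks: (n_chunks, S, C)
    # stage 1: local attention over frames inside each chunk
    G = np.stack([g + self_attention(g, *W_loc) for g in G])
    # stage 2: global attention across chunks at each intra-chunk position
    G = G.transpose(1, 0, 2)
    G = np.stack([g + self_attention(g, *W_glb) for g in G])
    F = G.transpose(1, 0, 2).reshape(-1, C)
    mask = 1.0 / (1.0 + np.exp(-(F @ W_mask)))   # masking module (sigmoid)
    return overlap_add((mask * E) @ W_dec, hop)  # decoder + overlap-add

x = rng.standard_normal(264)   # 264 samples -> 32 frames -> 8 chunks of 4
y = tstnn_forward(x)
```

The two-stage split is the key design choice: attention within short chunks captures local detail at low cost, and attention across chunks at each position restores a global receptive field without attending over the full sequence at once.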
In this paper, we propose a transformer-based U-Net architecture, called the context-aware U-Net (CAUNet), for end-to-end speech denoising in the time domain. The proposed model adopts a dilated-dense block in both the encoder and decoder layers of the U-Net to strengthen feature propagation and enlarge the receptive field of the features. It also uses stacked two-stage transformer blocks to efficiently extract local and global contextual information from the encoder output, based on which the enhanced speech is reconstructed at the decoder. Experimental results show that our model outperforms most state-of-the-art methods in both the time and frequency domains, while maintaining relatively low model complexity.
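The dilated-dense block combines two standard ideas: dense connectivity (each layer receives the concatenation of all earlier feature maps, strengthening propagation) and dilated convolutions (exponentially growing dilation enlarges the receptive field without extra parameters). A minimal numpy sketch under assumed dimensions, with random stand-in weights; the channel counts, growth rate, and depth are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def dilated_conv1d(x, w, dilation):
    # x: (C_in, T), w: (C_out, C_in, K); "same" padding for odd K
    C_out, C_in, K = w.shape
    pad = (K - 1) * dilation // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    T = x.shape[1]
    y = np.zeros((C_out, T))
    for k in range(K):
        # each kernel tap reads the input shifted by k * dilation samples
        y += w[:, :, k] @ xp[:, k * dilation : k * dilation + T]
    return y

def dilated_dense_block(x, weights):
    # dense connectivity: layer i sees the concat of the input and all
    # previous layer outputs; dilation doubles at each layer (1, 2, 4, ...)
    feats = [x]
    for i, w in enumerate(weights):
        inp = np.concatenate(feats, axis=0)
        feats.append(np.maximum(dilated_conv1d(inp, w, 2 ** i), 0))  # ReLU
    return np.concatenate(feats, axis=0)

# illustrative sizes: 8 input channels, growth rate 4, kernel 3, depth 3
C, G, K, depth = 8, 4, 3, 3
weights = [rng.standard_normal((G, C + i * G, K)) * 0.1 for i in range(depth)]

x = rng.standard_normal((C, 64))
out = dilated_dense_block(x, weights)   # (C + depth * G, 64) = (20, 64)
```

With kernel size 3 and dilations 1, 2, 4, three layers already cover a receptive field of 15 samples, versus 7 for the same depth without dilation.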