ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9413457
ICASSP 2021 Acoustic Echo Cancellation Challenge: Datasets, Testing Framework, and Results

Abstract: The ICASSP 2021 Acoustic Echo Cancellation Challenge is intended to stimulate research in the area of acoustic echo cancellation (AEC), which is an important part of speech enhancement and still a top issue in audio communication and conferencing systems. Many recent AEC studies report good performance on synthetic datasets where the train and test samples come from the same underlying distribution. However, the AEC performance often degrades significantly on real recordings. Also, most of the conventional obj…
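
For readers unfamiliar with the task, the sketch below shows a classical adaptive echo canceller based on normalized least mean squares (NLMS); the function name, filter length, and step size are illustrative assumptions and are not part of the challenge paper.

```python
import numpy as np

def nlms_aec(farend, mic, filt_len=512, mu=0.1, eps=1e-8):
    """Minimal NLMS echo canceller sketch (illustrative, not the challenge baseline).

    farend -- far-end (loudspeaker) reference signal, 1-D array
    mic    -- microphone signal containing echo plus near-end speech
    Returns the error signal, which serves as the near-end speech estimate.
    """
    w = np.zeros(filt_len)        # adaptive filter taps
    x = np.zeros(filt_len)        # delay line of recent far-end samples
    out = np.zeros(len(mic))
    n_samples = min(len(mic), len(farend))
    for n in range(n_samples):
        x = np.roll(x, 1)
        x[0] = farend[n]
        echo_hat = np.dot(w, x)                    # linear echo estimate
        e = mic[n] - echo_hat                      # residual = near-end estimate
        w += mu * e * x / (np.dot(x, x) + eps)     # normalized LMS update
        out[n] = e
    return out
```

A purely linear filter like this cannot model nonlinear loudspeaker distortions found in real devices, which is one motivation for the deep-learning approaches cited below.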

Cited by 51 publications (22 citation statements). References 16 publications.
“…The best results in each column and the best overall results are bolded and underlined, respectively. Samples of the processed audio clips can be found on our demo page 1.…”
Section: Results and Analysis
Mentioning confidence: 99%
“…Deep neural network based AEC algorithms flourish with the recent series of AEC-Challenges [1,2,3]. Westhausen et al [4] adopt a dual-signal transformation LSTM network (DTLN) and use both the microphone signal and the far-end signal to predict the near-end speech in an end-to-end fashion.…”
Section: Introduction
Mentioning confidence: 99%
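
The approach of Westhausen et al., as summarized in the quote above, feeds both the microphone and the far-end signal to an LSTM network that predicts the near-end speech. The sketch below illustrates that dual-input idea as a simple magnitude-mask estimator in PyTorch; the layer sizes, number of frequency bins, and masking scheme are assumptions for illustration and do not reproduce the actual DTLN architecture of [4].

```python
import torch
import torch.nn as nn

class DualInputMaskNet(nn.Module):
    """Illustrative dual-input LSTM mask estimator (not the actual DTLN)."""
    def __init__(self, n_bins=257, hidden=128):
        super().__init__()
        # Concatenate mic and far-end magnitude spectra along the feature axis.
        self.lstm = nn.LSTM(input_size=2 * n_bins, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())

    def forward(self, mic_mag, far_mag):
        # mic_mag, far_mag: (batch, frames, n_bins) magnitude spectrograms
        feats = torch.cat([mic_mag, far_mag], dim=-1)
        h, _ = self.lstm(feats)
        m = self.mask(h)            # per-bin mask in [0, 1]
        return m * mic_mag          # estimated near-end magnitude

# Example usage with random tensors standing in for STFT magnitudes.
net = DualInputMaskNet()
mic = torch.rand(4, 100, 257)
far = torch.rand(4, 100, 257)
nearend_est = net(mic, far)         # shape: (4, 100, 257)
```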
“…The "other degradation DMOS" test is dubbed as "NE", describing quality of NE speech. More details can be found in [24].…”
Section: Results
Mentioning confidence: 99%
“…The model is trained with synthetic files from the database provided by Microsoft for the Interspeech 2021 AEC Challenge [24], in the following labeled as Dsyn. The audio files were created with varying conditions, including single-talk, double-talk, both NE and FE noise, as well as simulated nonlinear loudspeaker distortions.…”
Section: Dataset
Mentioning confidence: 99%
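
The quoted training setup pairs far-end, microphone, and near-end signals from the challenge's synthetic corpus (labeled Dsyn). The snippet below sketches how such a triple might be loaded; the directory names and file-name pattern are assumed for illustration and should be adjusted to the actual layout of the released dataset.

```python
import os
import soundfile as sf  # pip install soundfile

def load_synthetic_triple(root, file_id):
    """Load one (far-end, microphone, near-end target) triple from a synthetic AEC corpus.

    The folder and file-name convention used here is an assumption for
    illustration; adapt the paths to the layout of the released dataset.
    """
    paths = {
        "farend": os.path.join(root, "farend_speech", f"farend_speech_fileid_{file_id}.wav"),
        "mic": os.path.join(root, "nearend_mic_signal", f"nearend_mic_fileid_{file_id}.wav"),
        "target": os.path.join(root, "nearend_speech", f"nearend_speech_fileid_{file_id}.wav"),
    }
    signals, rate = {}, None
    for key, path in paths.items():
        audio, rate = sf.read(path)
        signals[key] = audio
    return signals["farend"], signals["mic"], signals["target"], rate

# Example (hypothetical root directory):
# far, mic, target, sr = load_synthetic_triple("Dsyn", 0)
```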