Interspeech 2019
DOI: 10.21437/interspeech.2019-1344
Kernel Machines Beat Deep Neural Networks on Mask-Based Single-Channel Speech Enhancement

Abstract: We apply a fast kernel method to mask-based single-channel speech enhancement. Specifically, our method solves a kernel regression problem associated with a non-smooth kernel function (the exponential power kernel) using a highly efficient iterative method (EigenPro). Due to the simplicity of this method, its hyper-parameters, such as the kernel bandwidth, can be selected automatically and efficiently using line search on subsamples of the training data. We observe an empirical correlation between the regression loss (mean …
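A minimal sketch of the kernel family the abstract refers to. This is not the paper's implementation: the EigenPro iterative solver and the line-search bandwidth selection are replaced by a plain regularized kernel ridge solve on toy data, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def exp_power_kernel(X, Y, gamma=0.5, sigma=1.0):
    """Exponential power kernel K(x, y) = exp(-||x - y||^gamma / sigma).

    gamma = 1 recovers the Laplace kernel; gamma < 1 gives the
    non-smooth kernels reported to perform best.
    """
    # Pairwise Euclidean distances between rows of X and rows of Y.
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-(d ** gamma) / sigma)

# Toy kernel regression on random data (stand-in for the paper's solver).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((50, 8))
y_train = rng.standard_normal(50)

K = exp_power_kernel(X_train, X_train, gamma=0.5, sigma=1.0)
# Small ridge term for numerical stability; the paper instead runs EigenPro.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y_train)

X_test = rng.standard_normal((5, 8))
y_pred = exp_power_kernel(X_test, X_train, gamma=0.5, sigma=1.0) @ alpha
```

The direct solve is O(n³) and only practical for small n; EigenPro's appeal, per the abstract, is making this regression fast at scale.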

Cited by 5 publications (11 citation statements). References 9 publications (14 reference statements).
“…If it is restricted to the unit sphere, the RKHS of the exponential power kernel with γ < 1 is even larger than that of NTK. This result partially explains the observation in [19] that the best performance is attained by a highly non-smooth exponential power kernel with γ < 1. Geifman et al [18] applied the exponential power kernel and the NTK to classification and regression tasks on the UCI dataset and other large scale datasets.…”
Section: Introduction (supporting; confidence: 75%)
“…For example, Belkin et al [7] showed experimentally that the Laplace kernel and neural networks had similar performance in fitting random labels. In the task of speech enhancement, exponential power kernels K^exp_{γ,σ}(x, y) = e^{−‖x−y‖^γ/σ}, which include the Laplace kernel as a special case, outperform deep neural networks with even shorter training time [19]. The experiments in [18] also exhibited similar performance of the Laplace kernel and the NTK.…”
Section: Introduction (mentioning; confidence: 75%)
“…Instead, we use them to study the ability of our approach to sample condensation without impairing the classification performance of kernel machines. The power of kernel machines themselves as classifiers has already been demonstrated in the literature [4,9].…”
Section: Data Sets (mentioning; confidence: 99%)
“…Compared to deep neural networks (DNN), they can be interpreted as two-layer NNs. Despite their simplicity, however, kernel machines have turned out to be a good alternative to DNNs, capable of matching and even surpassing their performance while using fewer computational resources in training [8,9].…”
Section: Introduction (mentioning; confidence: 99%)