2012
DOI: 10.1007/978-3-642-35292-8_22

Speech Denoising Using Non-negative Matrix Factorization with Kullback-Leibler Divergence and Sparseness Constraints

Abstract: A speech denoising method based on Non-Negative Matrix Factorization (NMF) is presented in this paper. With respect to previous related works, this paper makes two contributions. First, our method does not assume a priori knowledge about the nature of the noise. Second, it combines the use of the Kullback-Leibler divergence with sparseness constraints on the activation matrix, improving the performance of similar techniques that minimize the Euclidean distance and/or do not consider any sparsification…

Cited by 7 publications (3 citation statements). References 11 publications (21 reference statements).
“…The Kullback-Leibler divergence results in a non-negative quantity and is unbounded. In this work, the KL divergence is considered because it has recently been used, with good results, in audio processing tasks such as speech enhancement and denoising for automatic speech recognition [21, 28], feature extraction [22] or acoustic event classification [22, 29]. To find a local optimum value for the KL divergence between V and (WH), an iterative scheme with multiplicative update rules can be used as proposed in [27] and stated in Eqs (3) and (4), where 1 is a matrix of the same size as V, whose elements are all ones, and the multiplications ⊗ and divisions are component-wise operations.…”
Section: Methods
confidence: 99%
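The Eqs (3) and (4) referenced in the quote are not reproduced in the snippet. For orientation, here is a sketch of the standard generalized KL divergence and the corresponding multiplicative update rules from Lee and Seung [27], which the quote appears to paraphrase; the paper's own equations, which additionally carry the sparseness constraint, may differ in detail:

```latex
% Generalized KL divergence between V and its factorization WH
D_{\mathrm{KL}}\!\left(V \,\|\, WH\right)
  = \sum_{i,j}\left( V_{ij}\,\log\frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right)

% Multiplicative updates; \otimes and \oslash (and the fraction bars)
% act component-wise, and \mathbf{1} is the all-ones matrix of the
% same size as V, as the quote describes
H \leftarrow H \otimes \frac{W^{\top}\!\left(V \oslash (WH)\right)}{W^{\top}\mathbf{1}},
\qquad
W \leftarrow W \otimes \frac{\left(V \oslash (WH)\right) H^{\top}}{\mathbf{1}\,H^{\top}}
```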
“…The procedure for obtaining the acoustic patterns is shown in Fig 2, and it is similar to the NMF-based supervised method presented in [28] in which the same idea is utilized to build models of clean speech and noise.…”
Section: Methods
confidence: 99%
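The supervised scheme the quote alludes to can be sketched in a few lines of numpy: bases are learned separately from clean speech and from noise, concatenated, and held fixed while only the activations are fitted on the noisy signal. Everything below is illustrative and assumed (function names, ranks, the Wiener-style mask), not the cited paper's exact procedure:

```python
import numpy as np

def fit_activations(V, W, n_iter=200, eps=1e-9):
    """Fit H in V ~= W H with the basis W held fixed, using the
    multiplicative update for the KL divergence."""
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1])) + eps
    ones = np.ones_like(V)  # the all-ones matrix from the update rule
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
    return H

# Hypothetical pre-trained spectral bases (freq_bins x rank): one learned
# from clean speech, one from noise. Y stands in for a noisy magnitude
# spectrogram (random data here, purely for illustration).
rng = np.random.default_rng(1)
W_speech = rng.random((257, 20))
W_noise = rng.random((257, 20))
Y = rng.random((257, 120))

W = np.hstack([W_speech, W_noise])   # fixed, concatenated dictionary
H = fit_activations(Y, W)
H_s, H_n = H[:20], H[20:]            # speech vs. noise activations
S_part = W_speech @ H_s
N_part = W_noise @ H_n
mask = S_part / (S_part + N_part + 1e-9)  # Wiener-style soft mask
S_hat = mask * Y                     # estimated clean-speech spectrogram
```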
“…In this work, the KL divergence is considered because it has recently been used with good results in speech processing tasks, such as speech enhancement and denoising for ASR tasks (Wilson et al., 2008; Ludeña-Choez & Gallardo-Antolín, 2012) or feature extraction (Schuller et al., 2010). In order to find a local optimum value for the KL divergence between V and (WH), an iterative scheme with multiplicative update rules can be used as proposed in (Lee & Seung, 1999) and stated in (7)…”
Section: Non-negative Matrix Factorization (NMF)
confidence: 99%
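As a companion to the update rule cited as (7), here is a minimal, self-contained sketch of full KL-NMF with multiplicative updates, plus an optional L1 penalty on the activations H. The penalty term is one common way to implement a sparseness constraint on the activation matrix; whether it matches the paper's exact formulation is an assumption:

```python
import numpy as np

def kl_nmf(V, rank, n_iter=200, sparsity=0.0, eps=1e-9, seed=0):
    """Factorize V ~= W H by minimizing the generalized KL divergence
    with multiplicative updates (Lee & Seung, 1999). `sparsity` adds an
    L1 penalty on H, an assumed stand-in for the paper's sparseness
    constraint on the activation matrix."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + eps    # spectral basis vectors
    H = rng.random((rank, n_frames)) + eps  # activations

    ones = np.ones_like(V)  # the all-ones matrix "1" from the update rules
    for _ in range(n_iter):
        WH = W @ H + eps
        # H update: the extra `sparsity` term in the denominator
        # shrinks activations toward zero (L1 penalty on H)
        H *= (W.T @ (V / WH)) / (W.T @ ones + sparsity + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
    return W, H

# Usage on a toy magnitude spectrogram:
V = np.random.default_rng(1).random((257, 100))
W, H = kl_nmf(V, rank=20, sparsity=0.1)
print(np.linalg.norm(V - W @ H))  # reconstruction error
```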