2012
DOI: 10.1109/tasl.2011.2172943
Improving Robustness of Codebook-Based Noise Estimation Approaches With Delta Codebooks

Cited by 17 publications (8 citation statements)
References 25 publications
“…In [4], an efficient codebook search method is introduced to reduce the speech codebook search range for the ML estimate. In [5], the MMSE estimate requires a relatively small speech codebook with only 32 entries, without severe degradation of performance. In our experiment, the NB and WB speech LSF codebooks are trained using the LBG method [6] with 10 min of speech from the TIMIT database, using the Itakura-Saito measure [2].…”
Section: Results
confidence: 99%
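The excerpt above refers to training speech LSF codebooks with the LBG (Linde-Buzo-Gray) algorithm. A minimal sketch of LBG follows, under simplifying assumptions: Euclidean distortion stands in for the Itakura-Saito measure named in the excerpt, the toy data stands in for LSF vectors, and all names are illustrative.

```python
import numpy as np

def lbg_codebook(train, target_size, eps=0.01, n_iter=20):
    """Train a codebook with the LBG splitting algorithm (a k-means
    variant): start from the global centroid, repeatedly split each
    centroid into two perturbed copies, then refine with Lloyd
    iterations. Euclidean distortion is used here as a stand-in for
    the Itakura-Saito measure."""
    codebook = train.mean(axis=0, keepdims=True)
    while codebook.shape[0] < target_size:
        # Split every centroid into two slightly perturbed copies.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # Assign each training vector to its nearest centroid.
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # Move each centroid to the mean of its cell (skip empty cells).
            for k in range(codebook.shape[0]):
                members = train[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Toy data standing in for 10-dimensional LSF vectors.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 10))
cb = lbg_codebook(data, 32)   # a 32-entry codebook, as in [5]
print(cb.shape)  # → (32, 10)
```

Because LBG doubles the codebook at each split, the target size is naturally a power of two, which matches the 32-entry codebook mentioned in [5].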
“…In (19), denotes the iteration index, is the index of the speech utterance in the database, denote the missing data of the EM algorithm, which are the sequence of the underlying states and speech gains. Furthermore, is the posterior state probability, which is defined as equation (21) at the bottom of the page.…”
Section: A. Off-line Parameter Training of Speech and Noise SARHMMs
confidence: 99%
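The posterior state probability described in the excerpt (the E-step "responsibility" of each hidden state, given the observation) can be sketched as below. The function and its inputs are illustrative, not taken from the paper; the computation is done in the log domain for numerical stability.

```python
import numpy as np

def posterior_state_probs(log_lik, log_prior):
    """Posterior probability of each hidden state for one observation:
    p(state=j | x) ∝ p(x | state=j) * p(state=j), normalised over all
    states via log-sum-exp."""
    log_post = log_lik + log_prior
    log_post -= np.logaddexp.reduce(log_post)   # normalise in log domain
    return np.exp(log_post)

# Two hypothetical states with equal priors and likelihoods 0.2 and 0.6.
post = posterior_state_probs(np.log([0.2, 0.6]), np.log([0.5, 0.5]))
print(post)  # → [0.25 0.75]
```

In full EM training, these posteriors weight the state-conditional statistics (here, the AR parameters and gains) when the model is re-estimated in the M-step.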
“…To address noise suppression in non-stationary environments, auto-regressive hidden Markov models (ARHMM) [15]- [17] and codebooks [18]- [21] have been used successfully to model the statistics of speech and noise for speech enhancement. In these methods, the speech and noise signals are modeled as AR processes [22].…”
Section: Introduction
confidence: 99%
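The excerpt models speech and noise as AR (all-pole) processes. A minimal sketch of AR synthesis, with illustrative names and a toy first-order filter, shows the recursion x[n] = -sum_k a[k] x[n-1-k] + e[n] that underlies these models:

```python
import numpy as np

def ar_synthesize(a, excitation):
    """Generate an AR(p) signal by the all-pole recursion
    x[n] = -sum_k a[k] * x[n-1-k] + e[n]."""
    p = len(a)
    x = np.zeros(len(excitation))
    for n in range(len(excitation)):
        for k in range(p):
            if n - 1 - k >= 0:
                x[n] -= a[k] * x[n - 1 - k]
        x[n] += excitation[n]
    return x

# AR(1) with coefficient a = [-0.9]: a strongly correlated, low-pass
# process, a crude stand-in for a voiced-speech or engine-noise spectrum.
rng = np.random.default_rng(1)
e = rng.normal(scale=0.1, size=2000)
x = ar_synthesize([-0.9], e)
```

The AR coefficients (often stored as LSFs) and an excitation gain are exactly the quantities the codebooks and ARHMM states encode.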
“…Speech enhancement algorithms which employ trained models, such as codebooks [24][25][26][27][28], hidden Markov models (HMM) [29][30][31], Gaussian mixture models (GMM) [32], non-negative matrix factorization (NMF) models [33], dictionaries [34], etc., for speech and noise data are able to process noisy speech with sufficient accuracy even under nonstationary noise conditions. For example, codebook-based speech enhancement (CBSE) algorithms [25,26] estimate the noise power spectrum for short segments of noisy speech, thus tracking nonstationary noise better than the buffer-based methods [18].…”
Section: Introduction
confidence: 99%
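The frame-wise noise estimation that the excerpt attributes to CBSE methods [25, 26] can be illustrated with an exhaustive codebook search; this is a simplified sketch with hypothetical PSD codebooks and a log-spectral distortion, not the papers' exact criterion.

```python
import numpy as np

def estimate_noise_psd(noisy_psd, speech_cb, noise_cb):
    """Frame-wise codebook search: pick the (speech, noise) PSD pair
    whose sum is closest to the observed noisy PSD under a
    log-spectral distance, and return the chosen noise entry."""
    best, best_noise = np.inf, None
    for s in speech_cb:
        for w in noise_cb:
            dist = np.mean((np.log(noisy_psd) - np.log(s + w)) ** 2)
            if dist < best:
                best, best_noise = dist, w
    return best_noise

# Toy 4-bin PSDs; the observation is built so that the second noise
# codebook entry is the correct answer.
speech_cb = np.array([[4.0, 2.0, 1.0, 0.5]])
noise_cb = np.array([[1.0, 1.0, 1.0, 1.0], [0.1, 0.2, 0.4, 0.8]])
observed = speech_cb[0] + noise_cb[1]
print(estimate_noise_psd(observed, speech_cb, noise_cb))  # → [0.1 0.2 0.4 0.8]
```

Because the search is repeated for every short segment, the noise estimate can change from frame to frame, which is why these methods track nonstationary noise better than buffer-based approaches.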