2012
DOI: 10.1109/tasl.2012.2186807

Sparse Linear Prediction and Its Applications to Speech Processing

Abstract: The aim of this paper is to provide an overview of Sparse Linear Prediction, a set of speech processing tools created by introducing sparsity constraints into the linear prediction framework. These tools have been shown to be effective in several problems related to the modeling and coding of speech signals. For speech analysis, we provide predictors that are accurate in modeling the speech production process and overcome problems related to traditional linear prediction. In particular, the predictors obtained offer a mo…

Cited by 111 publications (123 citation statements); cites 44 references. Citing publications span 2013 to 2022.
“…Stability is intrinsically guaranteed by the construction of the problem [3] and can easily be preserved by the numerical robustness of the Levinson recursion. Nevertheless, in LPC of speech, sparsity criteria have been shown to provide a valid alternative to the 2-norm minimization criterion, overcoming most of its deficiencies in modeling and coding [4][5][6][7][8]. In particular, in [6], a new formulation for speech coding is introduced that provides not only a sparse approximation of the prediction error, which allows for a simple coding strategy, but also a sparse approximation of a high-order predictor that successfully models short-term and long-term redundancies jointly.…”
Section: Introduction
confidence: 99%
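To make the contrast with 2-norm LPC concrete, the following is a minimal sketch of 1-norm (sparse-residual) linear prediction on a single frame. It assumes the cvxpy convex-optimization package; the function name sparse_lp, the frame length, and the predictor order are illustrative choices, not taken from the cited papers.

```python
import numpy as np
import cvxpy as cp

def sparse_lp(x, order=10):
    """Linear prediction with a 1-norm residual criterion:
    minimize ||x[n] - sum_k a[k] x[n-k]||_1 over the frame."""
    N = len(x)
    # Matrix of past samples (covariance-method style): row for time n
    # holds x[n-1], ..., x[n-order], for n = order, ..., N-1.
    X = np.column_stack([x[order - k - 1 : N - k - 1] for k in range(order)])
    y = x[order:]
    a = cp.Variable(order)
    # The 1-norm promotes a sparse, spiky residual, matching the
    # impulse-like excitation of voiced speech better than the 2-norm.
    cp.Problem(cp.Minimize(cp.norm1(y - X @ a))).solve()
    return a.value

rng = np.random.default_rng(0)
frame = rng.standard_normal(160)   # stand-in for a 20 ms frame at 8 kHz
print(sparse_lp(frame, order=10))
```

Unlike the 2-norm problem, this has no closed-form (Levinson) solution, which is why the cited work pairs the sparse criterion with convex-optimization solvers.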
“…, i − p − 1, and the optimal choice of these M − p indices yields the second term W_{i−p−1}(M − p). Since the index i − p is not included, the two terms simply add as in (27). The sum is then maximized over all choices of p. For p = 0, the right-hand side of (28) reduces to W_{i−1}(M), i.e., the last index is not used.…”
Section: Block-diagonal Q
confidence: 99%
“…Sparse linear prediction for speech coding is proposed in [27], using iteratively reweighted ℓ1-norm minimization to promote sparsity in the residuals and improve coding performance.…”
Section: Introduction
confidence: 99%
“…The algorithm starts with plain ℓ1-norm minimization; then, iteratively, the resulting residuals are used to re-weight the ℓ1-norm cost function so that points with larger residuals (outliers) are penalized less and points with smaller residuals are penalized more heavily. Hence, the optimizer encourages small values to become smaller while augmenting the amplitude of the outliers [13].…”
Section: Approaching the ℓ0-Norm
confidence: 99%
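The re-weighting loop described in this excerpt can be sketched as follows. This is a minimal illustration assuming cvxpy; the damping constant eps (which prevents division by zero) and the iteration count iters are illustrative choices, not values from [13].

```python
import numpy as np
import cvxpy as cp

def reweighted_l1(A, b, iters=5, eps=1e-3):
    """Iteratively reweighted l1 minimization of the residual r = b - A x,
    used as a surrogate for l0-norm (cardinality) minimization."""
    m, n = A.shape
    w = np.ones(m)        # first pass: plain (unweighted) l1 minimization
    x = cp.Variable(n)
    for _ in range(iters):
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, b - A @ x)))).solve()
        # Re-weight inversely to residual magnitude: small residuals get
        # large weights (pushed further toward zero), while large residuals
        # (outliers) get small weights and are penalized less.
        w = 1.0 / (np.abs(b - A @ x.value) + eps)
    return x.value
```

Each pass is still a convex problem; only the weights change between passes, which is what lets the sequence of ℓ1 problems mimic the non-convex ℓ0 objective.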