2012 IEEE International Workshop on Machine Learning for Signal Processing
DOI: 10.1109/mlsp.2012.6349775
Redundant time-frequency marginals for chirplet decomposition

Abstract: This paper presents the foundations of a novel method for chirplet signal decomposition. In contrast to basis-pursuit techniques on over-complete dictionaries, the proposed method uses a reduced set of adaptive parametric chirplets. The estimation criterion corresponds to the maximization of the likelihood of the chirplet parameters from redundant time-frequency marginals. The optimization algorithm that results from this scenario combines Gaussian mixture models and Huber's robust regression in an iterative fashion.
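Since the abstract only outlines the approach, here is a minimal, hypothetical sketch of the general idea it describes: treat per-frame spectrogram marginals as noisy observations of a chirplet's instantaneous frequency and fit the chirp parameters with a Huber-robust regression. The synthetic signal, the spectral-centroid shortcut, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch: estimate a single chirplet's start frequency and chirp rate
# from per-frame spectrogram marginals, using Huber-robust linear regression.
# Illustrates the general idea (marginals + robust regression) only.
import numpy as np
from scipy.signal import spectrogram
from scipy.optimize import least_squares

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic linear chirp: instantaneous frequency f(t) = 500 + 1500 t (Hz), plus noise
x = np.cos(2 * np.pi * (500 * t + 0.5 * 1500 * t**2)) + 0.1 * np.random.randn(t.size)

f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# Per-frame frequency marginal -> spectral centroid (a crude Gaussian mean)
centroids = (S * f[:, None]).sum(axis=0) / S.sum(axis=0)

# Huber-robust linear fit: centroid(t) ~ f0 + c * t
def resid(p):
    f0, c = p
    return f0 + c * tt - centroids

sol = least_squares(resid, x0=[f.mean(), 0.0], loss="huber", f_scale=50.0)
f0_hat, c_hat = sol.x
print(f"estimated start frequency ~ {f0_hat:.0f} Hz, chirp rate ~ {c_hat:.0f} Hz/s")
```

The Huber loss down-weights frames whose centroid is pulled away by noise or interfering components, which is the role robust regression plays in the fitting loop the abstract mentions.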

Cited by 2 publications (4 citation statements) · References 11 publications
“…In the case of the unvoiced segment, the final mixture is sparser, indicating the presence of non-tonal components. The proposed technique is planned to be used within the method [24], proposed recently by one of the authors, to detect and quantify chirp-tonal/stochastic signal components for audio analysis and coding.…”
Section: Simulation Results
confidence: 99%
“…Given the low number of Gaussian functions (K = 40), the final utilization results in 100%. LS-SVR performance in this scenario is also excellent: in the noiseless case its learning capabilities are somewhat ahead of the proposed GFM, but in the regression test SVR shows signs of overfitting, falling slightly behind GFM. However, the major difference arises when comparing the number of parameters to estimate: the proposed GFM deals with only KN = 600 parameters, while SVR complexity is equal to the size of the dataset, L = 2401.…”
Section: Least-Squares Support Vector Regression (LS-SVR)
confidence: 86%
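To make the parameter-count contrast in this excerpt concrete, here is a small hypothetical comparison. LS-SVR solves a problem closely related to kernel ridge regression, so scikit-learn's KernelRidge is used as a stand-in; the dataset and the per-Gaussian count N = 15 (so that KN = 600 with K = 40) are assumptions backed out of the excerpt's numbers, not code from the cited paper.

```python
# Hedged sketch of the complexity contrast: a kernel method keeps one dual
# coefficient per training sample (L coefficients), while a Gaussian function
# mixture (GFM) keeps K atoms x N parameters each. Sizes match the excerpt;
# the data itself is illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
L = 2401                      # training-set size, as in the excerpt
X = rng.uniform(-3, 3, size=(L, 1))
y = np.sin(3 * X[:, 0]) * np.exp(-0.3 * X[:, 0] ** 2) + 0.05 * rng.standard_normal(L)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X, y)
print("kernel-method parameters:", model.dual_coef_.shape[0])   # = L = 2401

K, N = 40, 15                 # K Gaussians, N parameters per Gaussian (assumed)
print("mixture-model parameters:", K * N)                        # = 600
```

The point of the excerpt stands out in the two printed numbers: the kernel model's complexity grows with the dataset, while the mixture's is fixed by the number of atoms.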
“…where φ_k corresponds to the vectorization of the matrix Φ_k in (10). As the relevance ρ_k(x) depends on all of the Gaussian parameters to be found, setting the gradient to zero results in a complicated system of nonlinear equations.…”
Section: B. Numerical Algorithm
confidence: 99%
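This excerpt alludes to the standard remedy for such coupled gradient equations: freeze the relevance weights, update each Gaussian from weighted moments of the target, and iterate. Below is a minimal EM-style sketch of that fixed-point idea on a toy one-dimensional target; the update rules are a common choice for fitting Gaussian mixtures to a nonnegative function, not necessarily the cited paper's.

```python
# Hedged sketch: fixed-point (EM-style) fitting of K Gaussian functions to a
# nonnegative target, avoiding the direct nonlinear gradient solve. The
# relevance rho_k(x) is recomputed with parameters frozen at each iteration.
import numpy as np

x = np.linspace(0, 10, 400)
# Toy target: sum of two Gaussian bumps, treated as a nonnegative "density"
y = 1.0 * np.exp(-0.5 * ((x - 3) / 0.6) ** 2) + 0.5 * np.exp(-0.5 * ((x - 7) / 1.0) ** 2)

K = 2
mu = np.array([2.0, 8.0])          # initial means
sig = np.array([1.0, 1.0])         # initial widths
w = np.array([0.5, 0.5])           # initial masses

for _ in range(50):
    # E-step: relevance of each component at each x, parameters frozen
    phi = w[:, None] * np.exp(-0.5 * ((x - mu[:, None]) / sig[:, None]) ** 2) / sig[:, None]
    rho = phi / phi.sum(axis=0, keepdims=True)          # shape (K, len(x))
    # M-step: weighted moments of the target under each relevance
    mass = (rho * y).sum(axis=1)
    mu = (rho * y * x).sum(axis=1) / mass
    sig = np.sqrt((rho * y * (x - mu[:, None]) ** 2).sum(axis=1) / mass)
    w = mass / mass.sum()

print("means:", mu.round(2), "widths:", sig.round(2))  # ~ [3, 7] and [0.6, 1.0]
```

Each iteration only solves decoupled weighted-moment updates, which is precisely what the nonlinear joint system makes impractical to do in one shot.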