2014
DOI: 10.1109/taslp.2014.2324175

Nonlinear Acoustic Echo Cancellation Based on Sparse Functional Link Representations

Abstract: Recently, a new class of nonlinear adaptive filtering architectures has been introduced based on the functional link adaptive filter (FLAF) model. Here we focus specifically on the split FLAF (SFLAF) architecture, which separates the adaptation of linear and nonlinear coefficients using two different adaptive filters in parallel. This property makes the SFLAF a well-suited method for problems like nonlinear acoustic echo cancellation (NAEC), in which the separation of filtering tasks brings some performance im…
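As a rough illustration of the split structure described in the abstract, the following Python sketch runs a linear NLMS filter and a trigonometric functional-link NLMS filter in parallel and adapts both with the shared error signal. The expansion choice, buffer lengths, and step sizes (`flaf_expansion`, `M`, `Me`, `mu_lin`, `mu_fl`) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def flaf_expansion(x_buf, order=3):
    """Trigonometric functional-link expansion of an input buffer.

    Each sample x contributes [sin(pi*p*x), cos(pi*p*x)] for p = 1..order,
    one common choice of functional links in the FLAF literature.
    """
    feats = []
    for p in range(1, order + 1):
        feats.append(np.sin(np.pi * p * x_buf))
        feats.append(np.cos(np.pi * p * x_buf))
    return np.concatenate(feats)

def sflaf_nlms(x, d, M=64, Me=16, order=3, mu_lin=0.5, mu_fl=0.3, eps=1e-6):
    """Split FLAF sketch: a linear NLMS filter and a nonlinear
    functional-link NLMS filter run in parallel; their outputs are summed
    and both are adapted with the same residual e[n] = d[n] - y[n]."""
    w_lin = np.zeros(M)                        # linear coefficients
    w_fl = np.zeros(2 * order * Me)            # functional-link coefficients
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        x_buf = x[n - M:n][::-1]               # linear input buffer
        g = flaf_expansion(x_buf[:Me], order)  # expand the newest Me samples
        y = w_lin @ x_buf + w_fl @ g           # parallel outputs summed
        e[n] = d[n] - y
        # NLMS updates, each normalized by its own input energy
        w_lin += mu_lin * e[n] * x_buf / (x_buf @ x_buf + eps)
        w_fl += mu_fl * e[n] * g / (g @ g + eps)
    return e
```

Normalizing each branch by its own input energy keeps the two step sizes independently tunable, which is the practical benefit of splitting the linear and nonlinear filtering tasks.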

Cited by 62 publications (32 citation statements)
References 33 publications
“…The optimization procedure involves a formulation similar to [23], but with a novel derivation resulting from the full weighted mask. Let us define the following joint input and weight vectors, respectively:…”
Section: A Joint Derivation of the Full PSFLAF
Mentioning confidence: 99%
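The quoted statement truncates before the definitions. A plausible form of such joint vectors, following the usual split-FLAF notation (the symbols below are assumptions for illustration, not taken from the citing paper), is:

$$
\mathbf{g}_n = \begin{bmatrix} \mathbf{x}_n \\ \boldsymbol{\varphi}_n \end{bmatrix}, \qquad
\mathbf{w}_n = \begin{bmatrix} \mathbf{w}_{\mathrm{L},n} \\ \mathbf{w}_{\mathrm{FL},n} \end{bmatrix},
$$

where $\mathbf{x}_n$ is the linear input buffer, $\boldsymbol{\varphi}_n$ its functional-link expansion, and $\mathbf{w}_{\mathrm{L},n}$, $\mathbf{w}_{\mathrm{FL},n}$ the linear and nonlinear coefficient vectors.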
“…Taking into account the least-perturbation property and the natural gradient adaptation, as suggested in [23], it is possible to express the constrained optimization problem as:…”
Section: A Joint Derivation of the Full PSFLAF
Mentioning confidence: 99%
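A standard least-perturbation formulation of this kind (the notation is assumed for illustration; the weighting matrix $\mathbf{G}_n$ carries the proportionate, natural-gradient metric) reads:

$$
\min_{\mathbf{w}_{n+1}} \; \left(\mathbf{w}_{n+1}-\mathbf{w}_n\right)^{\mathsf T}\mathbf{G}_n^{-1}\left(\mathbf{w}_{n+1}-\mathbf{w}_n\right)
\quad \text{subject to} \quad d_n = \mathbf{w}_{n+1}^{\mathsf T}\,\mathbf{x}_n ,
$$

whose solution by Lagrange multipliers is $\mathbf{w}_{n+1} = \mathbf{w}_n + e_n\,\mathbf{G}_n\mathbf{x}_n / (\mathbf{x}_n^{\mathsf T}\mathbf{G}_n\mathbf{x}_n)$ with $e_n = d_n - \mathbf{w}_n^{\mathsf T}\mathbf{x}_n$; in practice a step size $\mu$ and a regularizer are usually added.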
“…A formulation of the proportionate-update-based algorithms using natural gradient descent adaptation has been studied in [28, 29].…”
Section: Proportionate Update Approach
Mentioning confidence: 99%
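As background for the proportionate update idea referenced above, here is a minimal PNLMS-style step in Python (function name, parameters, and constants are illustrative assumptions, not the cited papers' exact algorithm):

```python
import numpy as np

def pnlms_update(w, x_buf, d, mu=0.5, rho=0.01, delta=1e-6):
    """One PNLMS-style step: each coefficient gets a step size roughly
    proportional to its own magnitude, which speeds convergence on sparse
    impulse responses such as acoustic echo paths."""
    # Per-coefficient gains, floored at rho * max|w| so that inactive
    # coefficients keep adapting.
    g = np.maximum(np.abs(w), rho * max(np.max(np.abs(w)), delta))
    g /= np.sum(g)                    # normalize gains
    e = d - w @ x_buf                 # a priori error
    w = w + mu * e * g * x_buf / (x_buf @ (g * x_buf) + delta)
    return w, e
```

Replacing the uniform NLMS normalization with the gain-weighted term $\mathbf{x}^{\mathsf T}\mathbf{G}\mathbf{x}$ is exactly the metric change that the natural-gradient view formalizes.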
“…Furthermore, in the following we provide the steady-state MSD of the mixed-norm algorithms under the assumption that the estimation error becomes small enough that the higher-order error terms can be neglected. Since (28) for the mixed-norm error objective yields…”
Section: Steady-State Analysis
Mentioning confidence: 99%
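For context on the mixed-norm objective mentioned in this statement, one common form is the least-mean mixed-norm family (the mixing parameter $\lambda$ below is assumed generic notation, not the citing paper's exact definition):

$$
J(e_n) = \lambda\,\mathbb{E}\!\left[e_n^2\right] + (1-\lambda)\,\mathbb{E}\!\left[e_n^4\right], \qquad 0 \le \lambda \le 1,
$$

which interpolates between the LMS cost ($\lambda = 1$) and the least-mean-fourth cost ($\lambda = 0$).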