2015
DOI: 10.3390/e17107149

Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion

Abstract: The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC criterion to develop a robust Hammerstein adaptive filter. Compared with the traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance especially in the presence…
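For readers unfamiliar with the structure, a Hammerstein model cascades a memoryless (static) nonlinearity with a linear FIR filter, and the adaptive filter estimates the coefficients of both stages. Below is a minimal Python sketch of the model's forward pass, assuming a polynomial nonlinearity; the coefficients and lengths are illustrative, not the paper's configuration.

```python
import numpy as np

def hammerstein_output(x, c, w):
    """Output of a Hammerstein model: a static polynomial nonlinearity
    followed by a linear FIR filter.

    s[n] = sum_k c[k] * x[n]**(k+1)   (memoryless nonlinear stage)
    y[n] = sum_m w[m] * s[n-m]        (linear FIR stage)
    """
    # Nonlinear stage: polynomial expansion applied sample-wise.
    s = sum(ck * x**(k + 1) for k, ck in enumerate(c))
    # Linear stage: FIR filtering of the intermediate signal.
    return np.convolve(s, w)[:len(x)]

# Illustrative (hypothetical) coefficients, not the paper's setup:
x = np.linspace(-1.0, 1.0, 8)
c = [1.0, 0.25, -0.1]   # polynomial nonlinearity coefficients
w = [0.6, -0.2, 0.1]    # FIR weights
print(hammerstein_output(x, c, w))
```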

Cited by 73 publications (42 citation statements)
References 43 publications (43 reference statements)
“…Based on the ZA techniques [4–6,10–13,22–28,32] and CIM theory [33–40], we propose a robust sparse LMMN algorithm by imposing a CIM penalty on the channel coefficient vector, and we utilize this constrained term to modify the cost function of the traditional LMMN algorithm. In CIM theory, CIM measures the similarity in kernel space between two random vectors $\mathbf{p} = \{p_1, \cdots, p_N\}$ and $\mathbf{q} = \{q_1, \cdots, q_N\}$, which can be described as [33–40]:…”
Section: Proposed Sparse CIM-LMMN Algorithm
confidence: 99%
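The CIM expression itself is truncated in the excerpt above. As a point of reference, the correntropy induced metric is commonly defined in the correntropy literature as $\mathrm{CIM}(\mathbf{p},\mathbf{q}) = \big[\kappa_\sigma(0) - \tfrac{1}{N}\sum_{i=1}^{N}\kappa_\sigma(p_i - q_i)\big]^{1/2}$ with a Gaussian kernel $\kappa_\sigma$. A minimal Python sketch under that assumption (the kernel width and test vector are illustrative):

```python
import numpy as np

def gaussian_kernel(x, sigma):
    """Gaussian kernel: kappa_sigma(x) = exp(-x^2/(2*sigma^2)) / (sqrt(2*pi)*sigma)."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def cim(p, q, sigma=1.0):
    """Correntropy induced metric between vectors p and q.

    CIM(p, q) = sqrt(kappa(0) - V(p, q)), where
    V(p, q) = mean_i kappa_sigma(p_i - q_i) is the empirical correntropy.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    v = np.mean(gaussian_kernel(p - q, sigma))
    return np.sqrt(gaussian_kernel(0.0, sigma) - v)

# Used as a sparsity penalty, CIM(w, 0) approximates the l0 norm
# when sigma is small, which is how a CIM penalty promotes sparsity:
w = np.array([0.0, 0.0, 0.5, 0.0, -1.2])
print(cim(w, np.zeros_like(w), sigma=0.05))
```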
“…It is noteworthy that well-known second-order statistics, such as the mean square error (MSE), depend heavily on Gaussian and linearity assumptions [17]. However, in the presence of non-Gaussian noise, and in particular large outliers, i.e., observations that deviate greatly from the bulk of the data, the effectiveness of MSE-based algorithms deteriorates significantly [29]. By contrast, maximizing the correntropy criterion is well suited to non-Gaussian signal processing and is particularly robust against large outliers, as shown next.…”
Section: A Correntropy
confidence: 99%
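To make the contrast with MSE concrete: a common stochastic-gradient filter under MCC scales each weight update by the Gaussian kernel of the error, exp(-e²/(2σ²)), so outlier errors produce vanishingly small steps, whereas plain LMS applies the full step regardless of error magnitude. A minimal sketch of this MCC-LMS update; the filter length, step size, kernel width, and toy identification setup are all illustrative assumptions, not the cited papers' configurations:

```python
import numpy as np

def mcc_lms(x, d, n_taps=8, mu=0.01, sigma=1.0):
    """LMS-style adaptive filter under the maximum correntropy criterion.

    Update: w <- w + mu * exp(-e^2 / (2*sigma^2)) * e * u
    The exponential factor down-weights large (outlier) errors; as
    sigma -> infinity it reduces to the ordinary MSE-based LMS update.
    """
    w = np.zeros(n_taps)
    errors = np.empty(len(x))
    for n in range(len(x)):
        # Regressor: the most recent n_taps input samples (zero-padded at start).
        u = np.zeros(n_taps)
        k = min(n + 1, n_taps)
        u[:k] = x[n::-1][:k]
        e = d[n] - w @ u
        w += mu * np.exp(-e**2 / (2.0 * sigma**2)) * e * u
        errors[n] = e
    return w, errors

# Toy system-identification example with impulsive noise (hypothetical):
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.7, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown system
noise = 0.01 * rng.standard_normal(5000)
noise[rng.random(5000) < 0.01] += 20.0                    # sparse large outliers
d = np.convolve(x, h)[:5000] + noise
w, _ = mcc_lms(x, d, n_taps=8, mu=0.05, sigma=2.0)
print(np.round(w, 2))
```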
“…Nevertheless, conventional identification algorithms, which rely on least squares or quadratic Lyapunov stability theory, are sensitive to large errors. To circumvent this difficulty, three possible solutions can be found in the literature: (i) outlier modeling; (ii) identification based on constrained LS, which guarantees boundedness of the squared norm of the parameter update; and (iii) parameter estimation by minimizing special cost functions that are less sensitive to large noise, such as the $p$th power of the absolute value of the prediction error, piecewise loss functions, and Gaussian loss functions. In the work of Cui et al, a differentiable function that approximates the absolute value function has been proposed to meet the differentiability requirement of most optimization algorithms.…”
Section: Introduction
confidence: 99%
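The robust costs mentioned in (iii) can be contrasted numerically: the quadratic MSE loss grows without bound, the p-th power loss with p < 2 grows more slowly, and the Gaussian (correntropy-style) loss is bounded, so a single outlier contributes at most a fixed amount to the cost. A minimal sketch, assuming representative parameter values (p = 1.2, σ = 1):

```python
import numpy as np

def mse_loss(e):
    # Quadratic loss: unbounded, dominated by large errors.
    return 0.5 * e**2

def p_power_loss(e, p=1.2):
    # p-th power of the absolute prediction error; p < 2 tempers large errors.
    return np.abs(e)**p / p

def gaussian_loss(e, sigma=1.0):
    # Bounded correntropy-style loss: saturates at 1 as |e| grows,
    # so any single outlier contributes at most 1 to the cost.
    return 1.0 - np.exp(-e**2 / (2.0 * sigma**2))

for e in (0.1, 1.0, 10.0, 100.0):
    print(f"e={e:>6}: MSE={mse_loss(e):10.2f}  "
          f"|e|^p={p_power_loss(e):8.2f}  Gauss={gaussian_loss(e):.4f}")
```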