2017
DOI: 10.1016/j.engappai.2016.11.010

Robust kernel adaptive filters based on mean p-power error for noisy chaotic time series prediction

Cited by 65 publications (35 citation statements: 0 supporting, 35 mentioning, 0 contrasting).
References 48 publications.
“…Alternative loss functions, designed in a similar way to the correntropic loss, can also be found in [23,24]. Among the previous methods, approaches that employ KRLS-type iterations can be found in [12,20,21]. The learning task: with the positive integer n ∈ ℤ>0 denoting discrete time, and with the input-output data pair (xₙ, yₙ) becoming available to the user at time n, the goal is to devise an online non-parametric algorithm that learns the unknown non-linear, multi-output system via a reproducing kernel Hilbert space ℋ, where the output vector yₙ may also carry noise and outliers.…”
Section: Introduction (mentioning)
confidence: 99%
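To make the quoted learning task concrete, the following is a minimal sketch of an online kernel adaptive filter of the kind described: a KLMS-style learner that predicts each incoming pair from its current kernel expansion in ℋ and then absorbs the pair as a new center. The Gaussian kernel, the step size, and all names are illustrative assumptions, not the cited paper's algorithm.

```python
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    """Gaussian kernel inducing the RKHS (bandwidth sigma is an assumption)."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Minimal KLMS-style online learner: f(x) = sum_i a_i * k(c_i, x)."""

    def __init__(self, step=0.5, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.coeffs))

    def update(self, x, y):
        e = y - self.predict(x)   # error on the newly arrived pair (x, y)
        self.centers.append(x)    # the expansion grows by one center per sample
        self.coeffs.append(self.step * e)
        return e
```

Feeding delay-embedded samples of a noisy series to `update`, one pair per time step, reproduces the online, non-parametric setting the excerpt refers to.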
“…Rather than adapting the classical RLS iterations [6] to KAF, as in [7,12,20,21], the proposed Algorithm 1 stems from the stochastic-approximation framework [26]. As in [12], a sample-average ℓp-norm error loss is also used here to define the objective function of the learning task's optimization problem. Nevertheless, apart from handling outlier-contaminated data, and in contrast to [12,20,21], the present framework also allows the use of faithful data, devoid of noise and outliers, as side information.…”
Section: Introduction (mentioning)
confidence: 99%
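For intuition on the sample-average ℓp-norm error loss, the sketch below shows the corresponding stochastic-approximation step for a linear-in-parameters model: the instantaneous gradient of |e|^p is p·|e|^(p−1)·sign(e), which for p < 2 grows sublinearly in |e| and so damps outlier-sized errors. The value of p, the step size, and the function name are assumptions.

```python
import numpy as np

def lmp_step(w, x, y, p=1.5, step=0.1):
    """One stochastic-approximation step on the instantaneous loss |y - w @ x|**p.
    The gradient w.r.t. w is -p * |e|**(p - 1) * sign(e) * x; for p < 2 the
    factor |e|**(p - 1) is sublinear in |e|, limiting the pull of large errors."""
    e = y - w @ x
    return w + step * p * np.abs(e) ** (p - 1) * np.sign(e) * x
```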
“…The MPE criterion, based on the p-th absolute moment of the error, can efficiently handle non-Gaussian data given a proper choice of p. In general, MPE is robust to large outliers when p < 2 [15], yielding robust adaptive filters [15], e.g., the kernel least mean p-power (KLMP) algorithm [18] and the kernel recursive least mean p-power (KRLP) algorithm [18]. Information-theoretic learning (ITL) can incorporate the complete error distribution into the learning process, improving filtering precision and robustness to outliers.…”
Section: Introduction (mentioning)
confidence: 99%
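A hedged sketch of a KLMP-style update, reconstructed from the description above rather than from the pseudocode of [18]: it is the usual kernel LMS recursion with the error term replaced by the gradient of the p-power error. The Gaussian kernel and all parameter values are assumptions.

```python
import numpy as np

def klmp_update(centers, coeffs, x, y, p=1.2, step=0.5, sigma=1.0):
    """One KLMP-style update (illustrative reconstruction, Gaussian kernel)."""
    if centers:
        k = np.exp(-np.sum((np.asarray(centers) - x) ** 2, axis=1)
                   / (2.0 * sigma ** 2))
        pred = k @ np.asarray(coeffs)
    else:
        pred = 0.0
    e = y - pred
    # KLMS would append step * e; KLMP appends step * p * |e|**(p-1) * sign(e),
    # which with p < 2 shrinks the contribution of outlier-sized errors.
    centers.append(np.asarray(x))
    coeffs.append(step * p * np.abs(e) ** (p - 1) * np.sign(e))
    return e
```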
“…As a commonly used online learning algorithm (OLA), the kernel adaptive filter (KAF) [2] was proposed to solve complicated nonlinear problems. However, the network size of a KAF grows linearly with the number of samples, leading to high computational overhead.…”
mentioning
confidence: 99%
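The growth problem, and one common remedy, a distance-based novelty criterion, can be sketched as follows; the threshold and the function name are illustrative, not the mechanism proposed in the cited work.

```python
import numpy as np

def maybe_add_center(centers, x, delta=0.1):
    """Novelty-criterion sparsification (sketch). Without it a KAF stores one
    kernel center per sample, so memory and per-prediction cost grow as O(n).
    Admitting x only when it is at least delta away from every stored center
    caps the dictionary size on a bounded input domain."""
    if all(np.linalg.norm(x - c) >= delta for c in centers):
        centers.append(x)
        return True    # dictionary grows
    return False       # x is redundant; existing coefficients are reused
```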
“…where θₖ denotes the weight vector in the random Fourier feature space (RFFS). In the following, Bayesian inference, under the assumption of Gaussian distributions for the weights and the noise, is used to estimate θₖ in the network with the fixed dimension D shown in (2). First, a Gaussian diffusion process associated with Gaussian noise 𝒩(q; 0, δ²_D I) is used to model the system parameters, i.e.,…”
mentioning
confidence: 99%
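One standard realization of such Gaussian-assumption Bayesian inference over fixed-dimension random Fourier feature weights is a Kalman-style recursion with a random-walk (Gaussian diffusion) state model. The sketch below follows that reading; the feature map, the dimension D, and the noise variances delta2 and sigma2 are assumptions, not the cited paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 64, 2                        # fixed feature dimension D, input dimension d
W = rng.normal(0.0, 1.0, (D, d))    # random Fourier frequencies (Gaussian kernel)
b = rng.uniform(0.0, 2.0 * np.pi, D)

def rff(x):
    """Fixed-dimension random Fourier feature map z(x)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def bayes_update(theta, P, x, y, delta2=1e-4, sigma2=1e-2):
    """One Gaussian Bayesian recursion for the feature-space weights:
    diffusion step theta_k = theta_{k-1} + q with q ~ N(0, delta2 * I), then
    conditioning on the observation y = z(x) @ theta_k + v, v ~ N(0, sigma2)."""
    P = P + delta2 * np.eye(len(theta))   # diffusion inflates the prior covariance
    z = rff(x)
    s = z @ P @ z + sigma2                # innovation variance
    K = P @ z / s                         # Kalman gain
    theta = theta + K * (y - z @ theta)   # posterior mean of theta_k
    P = P - np.outer(K, z @ P)            # posterior covariance (I - K z^T) P
    return theta, P
```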