2016
DOI: 10.1002/asjc.1425

Modal Kalman Filter

Abstract: In the Extended Kalman Filter (EKF), only the first‐order term of the Taylor series is employed; hence, the nonlinearities in the system dynamics are not fully captured. To overcome this drawback, the proposed method retains the higher‐order terms of the Taylor series and designs a new filter based on the Modal series. In this paper, based on the Modal series and careful approximations, the nonlinear filter is converted into a series of linear filters, and the resulting filter is named the Modal…
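For context, the first‐order limitation the abstract refers to is the Jacobian linearization used in the standard EKF. The sketch below is a minimal, generic EKF predict/update cycle, not the Modal Kalman Filter itself; the functions f and h and their Jacobians F_jac and H_jac are placeholders to be supplied by the user.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One generic EKF predict/update cycle (illustrative sketch only).

    f, h         : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    Only the first-order (Jacobian) term of the Taylor expansion is used here,
    which is exactly the limitation the Modal Kalman Filter addresses by
    keeping higher-order terms of the series.
    """
    # Predict: propagate the mean through f and the covariance through the Jacobian
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q

    # Update: linearize the measurement model around the predicted state
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```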

Cited by 9 publications (11 citation statements); References 16 publications.
“…These approaches can be primarily classified into two categories, approximating either the nonlinear function or the nonlinear-transformed PDFs. The former, with typical examples of the extended KF (EKF), modal KF (Mohammaddadi et al., 2017), divided difference filter (Nørgaard et al., 2000; Wang et al., 2017), and Fourier-Hermite KF (Sarmavuori and Särkkä, 2012), approximates the functions with polynomial expansions (e.g., Taylor series, Fourier-Hermite series, Stirling's interpolation, or Modal series). The latter, with representative examples of the unscented KF (UKF) (Julier and Uhlmann, 2004), Gauss-Hermite filter and central difference filter (Ito and Xiong, 2000), cubature KF (CKF) (Arasaratnam and Haykin, 2009; Jia et al., 2013), sparse-grid quadrature filter (Arasaratnam and Haykin, 2008; Jia et al., 2012), stochastic integration filter (Duník et al., 2013), and iterated posterior linearization filter (IPLF) (García-Fernández et al., 2015b; Raitoharju et al., 2017), is based on a set of deterministically chosen weighted sigma points.…”
Section: Nonlinearity
confidence: 99%
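To illustrate the second category mentioned in the statement above (filters built on deterministically chosen weighted sigma points), the following is a minimal sketch of the standard unscented transform associated with the UKF of Julier and Uhlmann; the scaling parameters alpha, beta, and kappa are common default choices and are not taken from the cited works.

```python
import numpy as np

def unscented_sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and weights for a mean x and covariance P."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of the scaled covariance (lower Cholesky factor)
    S = np.linalg.cholesky((n + lam) * P)

    sigmas = np.zeros((2 * n + 1, n))
    sigmas[0] = x
    for i in range(n):
        sigmas[i + 1] = x + S[:, i]
        sigmas[n + i + 1] = x - S[:, i]

    # Weights for recombining the mean (Wm) and covariance (Wc)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return sigmas, Wm, Wc
```

The sigma points are propagated through the nonlinear function and recombined with these weights, so the PDF rather than the function is approximated, in contrast to the polynomial-expansion filters of the first category.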
“…where D is the set value by which the bounds are narrowed, and v_i0 and v_iN are the maximum and minimum values of each row, respectively. After the ranges of each parameter have been updated, the parameter matrix is reconstructed according to equation (19), and the initial pheromone values of each point are reset according to equation (21). The search process is then repeated until the difference between v_il and v_iu reaches the required accuracy e, as illustrated by the overall process flowchart shown in Figure 4.…”
Section: Ant Colony Algorithm To Obtain Optimal Parameters
confidence: 99%
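Since equations (19) and (21) of the cited work are not reproduced in this excerpt, the following is only a generic sketch of the iterative range-narrowing loop it describes: each parameter interval is shrunk by the step D around the best value found, and the search repeats until the interval width reaches the accuracy e. The names search and narrow_search_ranges are hypothetical placeholders, not functions from the cited paper.

```python
def narrow_search_ranges(v_lower, v_upper, best, D, eps, search):
    """Generic bound-narrowing loop (illustrative placeholder for the cited ACO procedure).

    v_lower, v_upper : current lower/upper bound of each parameter
    best             : best parameter values found in the current pass
    D                : amount by which the bounds are narrowed per pass
    eps              : required accuracy on the interval width
    search           : callable running one ACO pass within the given bounds
                       (bound reconstruction and pheromone reset happen inside)
    """
    while max(u - l for l, u in zip(v_lower, v_upper)) > eps:
        # Shrink each interval by D around the best value found so far
        v_lower = [max(l, b - D) for l, b in zip(v_lower, best)]
        v_upper = [min(u, b + D) for u, b in zip(v_upper, best)]
        # Re-run the search within the narrowed, reconstructed bounds
        best = search(v_lower, v_upper)
    return best
```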
“…They are usually carried out based on the closed‐form Markov–Bayes recursion [8], e.g. the Kalman filter (KF), extended KF, modal KF [9], unscented KF [10], and cubature KF [11]. When the state‐space model and measurement model are both linear and Gaussian, the KF is the optimal, unbiased filter.…”
Section: Introduction
confidence: 99%
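As a point of reference for the statement that the KF is optimal in the linear Gaussian case, below is a minimal sketch of the standard linear Kalman predict/update recursion; the matrix names follow the usual textbook convention and are not taken from the cited work.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One linear Kalman cycle for x_{k+1} = A x_k + w_k, z_k = H x_k + v_k."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The nonlinear variants listed in the quoted statement (extended, modal, unscented, and cubature KFs) replace the exact linear propagation above with different approximations of the nonlinear models.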