2022
DOI: 10.3390/rs14153520
An Efficient Sparse Bayesian Learning STAP Algorithm with Adaptive Laplace Prior

Abstract: Space-time adaptive processing (STAP) encounters severe performance degradation with insufficient training samples in inhomogeneous environments. Sparse Bayesian learning (SBL) algorithms have attracted extensive attention because of their robust and self-regularizing nature. In this study, a computationally efficient SBL STAP algorithm with adaptive Laplace prior is developed. Firstly, a hierarchical Bayesian model with adaptive Laplace prior for complex-valued space-time snapshots (CALM-SBL) is formulated. La…

Cited by 6 publications (7 citation statements)
References 41 publications
“…where $\eta > 0$ is the regularisation parameter balancing the data-fitting term $\|\mathbf{X}-\mathbf{\Phi}\mathbf{A}\|_{\mathrm{F}}^{2}$ versus the sparse penalty term $\|\mathbf{A}\|_{2,0}$. Nevertheless, solving the above optimisation problem is NP-hard [36], so many recently proposed algorithms have been developed to efficiently find sparse solutions [24–27].…”
Section: Grid-based Sparse Recovery-based Space-time Adaptive Processing
confidence: 99%
“…The canonical $\ell_{2,0}$ mixed-norm minimisation cost function of SR-STAP is formulated as
$$\min_{\mathbf{A}} \|\mathbf{X}-\mathbf{\Phi}\mathbf{A}\|_{\mathrm{F}}^{2}+\eta\|\mathbf{A}\|_{2,0}$$
where $\eta > 0$ is the regularisation parameter balancing the data-fitting term $\|\mathbf{X}-\mathbf{\Phi}\mathbf{A}\|_{\mathrm{F}}^{2}$ versus the sparse penalty term $\|\mathbf{A}\|_{2,0}$. Nevertheless, solving the above optimisation problem is NP-hard [36], so many recently proposed algorithms have been developed to efficiently find sparse solutions [24–27].…”
Section: Sparse Recovery-based Space-time Adaptive Processing Prelimi...
confidence: 99%
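The quoted cost function can be sketched numerically. Below is a minimal, hypothetical illustration (names and shapes are assumptions, not from the cited paper): the data-fitting term is the squared Frobenius norm of the residual, and the $\ell_{2,0}$ penalty counts the rows of A with nonzero $\ell_2$ norm, i.e. the number of active space-time grid points.

```python
import numpy as np

def l20_objective(X, Phi, A, eta, tol=1e-12):
    """Evaluate ||X - Phi @ A||_F^2 + eta * ||A||_{2,0}.

    ||A||_{2,0} is the number of rows of A whose l2 norm exceeds
    `tol`, i.e. the count of active rows (grid points).
    Names/shapes here are illustrative only.
    """
    fit = np.linalg.norm(X - Phi @ A, ord="fro") ** 2
    row_norms = np.linalg.norm(A, axis=1)
    sparsity = int(np.count_nonzero(row_norms > tol))
    return fit + eta * sparsity
```

Because the row-counting penalty is combinatorial (hence the NP-hardness noted in the quote), practical SR-STAP algorithms replace it with convex or Bayesian surrogates rather than evaluating it directly.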