2016
DOI: 10.1109/tvt.2016.2542108

A Convex Relaxation Approach to Higher-Order Statistical Approaches to Signal Recovery

Abstract: In this work, we investigate an efficient numerical approach for solving higher order statistical methods for blind and semi-blind signal recovery from non-ideal channels. We develop numerical algorithms based on convex optimization relaxation for minimization of higher order statistical cost functions. The new formulation through convex relaxation overcomes the local convergence problem of existing gradient descent based algorithms and applies to several well-known cost functions for effective blind signal re…


Cited by 7 publications (4 citation statements)
References 33 publications (74 reference statements)
“…The most popular ones are based on Godard's constant modulus algorithm (CMA), the Shalvi-Weinstein algorithm (SWA), and the minimum entropy deconvolution (MED) algorithm [19]. These solutions rely on the constant amplitude of phase-modulated signals such as PSK waveforms.…”
Section: Related Work
confidence: 99%
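The CMA referenced above can be sketched as a stochastic-gradient descent on Godard's dispersion cost E[(|y|² − R₂)²]. The channel taps, equalizer length, and step size below are hypothetical values chosen purely for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols passed through a toy FIR channel (hypothetical setup)
N = 5000
s = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)
h = np.array([1.0, 0.4, -0.2])           # illustrative channel, not from the paper
x = np.convolve(s, h)[:N]

L = 7                                     # equalizer taps
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                           # center-spike initialization
R2 = np.mean(np.abs(s) ** 4) / np.mean(np.abs(s) ** 2)  # Godard dispersion constant
mu = 1e-3                                 # step size

for n in range(L, N):
    xn = x[n - L:n][::-1]                 # regressor, most recent sample first
    y = np.vdot(w, xn)                    # equalizer output y = w^H x
    e = y * (np.abs(y) ** 2 - R2)         # CMA(2,2) error term
    w -= mu * np.conj(e) * xn             # stochastic-gradient update

# after adaptation the output modulus should cluster near sqrt(R2)
y_out = np.array([np.vdot(w, x[n - L:n][::-1]) for n in range(L, N)])
dispersion = np.mean((np.abs(y_out[-1000:]) ** 2 - R2) ** 2)
```

Because the cost depends only on the output modulus, CMA converges up to an arbitrary phase rotation, which is exactly the local-minima issue the paper's convex relaxation is designed to avoid.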
“…The computational complexity of Algorithm 1 is mainly determined by problem (36), which is a semi-definite programming (SDP) problem and can be solved by the interior-point method (IPM). According to [35], the computational order of an SDP problem with g optimisation variables using the IPM is O(g^3.5). For problem (36), the number of optimised variables is counted as…”
Section: Iterative SCA Algorithm
confidence: 99%
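A quick back-of-the-envelope illustration of what the quoted O(g^3.5) interior-point cost implies: doubling the variable count g multiplies the work by 2^3.5 ≈ 11.3. The variable counts below are arbitrary, chosen only to show the scaling:

```python
# Relative interior-point SDP cost under the O(g^3.5) model, normalized to g = 100
base = 100 ** 3.5
ratios = [(g, g ** 3.5 / base) for g in (100, 200, 400)]
for g, r in ratios:
    print(f"g = {g:4d}  relative IPM cost ~ {r:6.1f}x")
```

So a fourfold increase in optimisation variables already costs about 128x the work, which is why keeping the variable count of problem (36) small matters.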
“…Specifically, they work as the corresponding normalized version when the consistency rule is met; otherwise they disregard the cross-correlation term and estimate the dispersion error with a linear function. In practice, all the above online algorithms have multiple local minima and require huge amounts of samples to converge [18], [19].…”
Section: Introduction
confidence: 99%