2018
DOI: 10.1109/tit.2017.2784481
Blind Demixing and Deconvolution at Near-Optimal Rate

Abstract: We consider simultaneous blind deconvolution of r source signals from their noisy superposition, a problem also referred to as blind demixing and deconvolution. This signal processing problem occurs in the context of the Internet of Things, where a massive number of sensors sporadically communicate only short messages over unknown channels. We show that robust recovery of message and channel vectors can be achieved via convex optimization when random linear encoding using i.i.d. complex Gaussian matrices is used a…
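The measurement model described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: all dimensions, the noise level, and the variable names (channels, messages, encoders) are assumptions. Each of r sources sends a short message encoded by an i.i.d. complex Gaussian matrix, convolved with an unknown short channel; the receiver observes the noisy superposition.

```python
# Toy sketch of the blind demixing/deconvolution measurement model:
# y = sum_i (h_i * C_i x_i) + noise, where * is circular convolution.
# All dimensions and names are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
L, K, N, r = 256, 8, 8, 3   # signal length, channel length, message length, sources

def circ_conv(a, b):
    """Circular convolution via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

# Short channels, zero-padded to length L.
channels = [np.r_[rng.standard_normal(K) + 1j * rng.standard_normal(K),
                  np.zeros(L - K)] for _ in range(r)]
messages = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(r)]
# i.i.d. complex Gaussian encoding matrices, one per source.
encoders = [(rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N)))
            / np.sqrt(2) for _ in range(r)]

noise = 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
y = sum(circ_conv(h, C @ x) for h, C, x in zip(channels, encoders, messages)) + noise
print(y.shape)  # (256,)
```

The recovery task is then to estimate every (h_i, x_i) pair from y alone, knowing only the encoding matrices.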

Cited by 42 publications (53 citation statements)
References 68 publications
“…The extension turns out to be nontrivial, since the "incoherence" between multiple sources in blind demixing distorts the statistical properties that hold in the single-source blind deconvolution scenario. A similar challenge has also been observed in [1], [2], which extend the convex relaxation approach (i.e., semidefinite programming) for blind deconvolution to the setting of blind demixing. Furthermore, the noisy measurements bring additional challenges in establishing theoretical guarantees.…”
Section: Introduction
Confidence: 62%
“…The incoherence between b j and h i for 1 ≤ i ≤ s, 1 ≤ j ≤ m specifies the smoothness of the loss function (2). Within the region of incoherence and contraction (defined in Section IV-A), which enjoys the qualified level of smoothness, the step size for the iterative refinement procedure can be chosen more aggressively according to generic optimization theory [10].…”
Section: B. Theoretical Results
Confidence: 99%
“…Our previous work [23] shows that the convex approach via semidefinite programming (see (2.10)) requires L ≥ C_0 s^2 (K + µ_h^2 N) log^3(L) to ensure exact recovery. Later, [15] improves this result to the near-optimal bound L ≥ C_0 s(K + µ_h^2 N) up to some log-factors. The difference between nonconvex and convex methods lies in the appearance of the condition number κ in (3.5).…”
Confidence: 78%
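The gap between the two sample-complexity bounds quoted above can be made concrete with a toy calculation. This is purely illustrative: the constant C_0 and the coherence µ_h are set to 1, and the dimensions are invented.

```python
# Toy comparison of the convex bound L >= C0 * s^2 * (K + N) * log^3(L)
# with the near-optimal bound L >= C0 * s * (K + N) (log-factors dropped).
# Constants and coherence are set to 1; all numbers are illustrative.
import math

s, K, N = 10, 50, 50      # sources, channel dimension, message dimension
L = 100_000               # candidate number of measurements

convex_bound = s**2 * (K + N) * math.log(L)**3
near_optimal_bound = s * (K + N)

# The s^2 scaling (plus log factors) dominates the linear-in-s bound.
print(convex_bound > near_optimal_bound)  # True
```

The point of the quoted improvement is exactly this: the number of required measurements grows linearly rather than quadratically in the number of sources s.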
“…The question of when the solution of (2.10) yields exact recovery was first answered in our previous work [23]. Later, [29, 15] improved this result to the near-optimal bound L ≥ C_0 s(K + N) up to some log-factors; the main theoretical result is informally summarized in the following theorem. While the SDP relaxation is definitely effective and has theoretical performance guarantees, the computational cost of solving an SDP already becomes too expensive for moderate-size problems, let alone large-scale ones.…”
Section: Convex Versus Nonconvex Approaches
Confidence: 99%
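The SDP relaxation (2.10) discussed in the quote above rests on a lifting trick: each bilinear measurement in the unknowns (h, x) is linear in the rank-one matrix X = h x^*, so recovery can be relaxed to a convex program over X. The identity itself is easy to verify numerically; the dimensions and names below are illustrative assumptions, not the paper's notation.

```python
# Numerical check of the lifting behind the SDP relaxation:
# the bilinear measurement (b^* h)(x^* a) equals b^* X a with X = h x^*,
# i.e. it is linear in the lifted rank-one variable X.
import numpy as np

rng = np.random.default_rng(1)
K, N = 5, 4
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # unknown channel
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # unknown message
b = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # known sensing vectors
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)

bilinear = (b.conj() @ h) * (x.conj() @ a)   # measurement in (h, x)
X = np.outer(h, x.conj())                    # lifted rank-one variable
linear = b.conj() @ X @ a                    # same measurement, linear in X

print(np.allclose(bilinear, linear))         # True
print(np.linalg.matrix_rank(X))              # 1
```

This is why the convex approach minimizes a nuclear-norm surrogate over X subject to the linear measurement constraints, at the cost noted in the quote: the lifted variable has K×N entries per source, which is what makes the SDP expensive at scale.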