2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2009.5394881
Random channel coding and blind deconvolution

Cited by 24 publications (26 citation statements)
References 14 publications
“…These properties were first defined in [27] for the development of a probabilistic and RIPless theory of compressed sensing, and then used in [28] for the problem of blind spikes deconvolution. Other random subspace assumptions are also used in [29, 30] for random channel coding and blind deconvolution. The transmitted signal in multi-user communication systems [24] can also be represented in a known low-dimensional random subspace when each transmitter sends out a random signal for the sake of security, privacy, or spread-spectrum communication.…”
Section: Theoretical Guarantee for Atomic Norm Denoising
Confidence: 99%
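The known-random-subspace model quoted above can be sketched in a few lines. The Gaussian basis, dimensions, and least-squares recovery below are illustrative assumptions for this sketch, not details taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 8          # ambient signal length, subspace dimension (k << n)

# Known random subspace basis (hypothetical Gaussian model)
B = rng.standard_normal((n, k)) / np.sqrt(n)

h = rng.standard_normal(k)   # low-dimensional coefficient vector
x = B @ h                    # transmitted signal lives in span(B)

# Because B is known at the receiver, recovering h from the clean
# signal x is an overdetermined least-squares problem
h_hat, *_ = np.linalg.lstsq(B, x, rcond=None)
assert np.allclose(h_hat, h)
```

The point of the subspace prior is exactly this dimension reduction: the unknown is the k-vector h, not the full n-sample signal x.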
“…Subspace membership and sparsity have long been used as priors in blind deconvolution. Previous works either use these priors without theoretical justification [5][6][7][8][9], or impose probabilistic models and show successful recovery with high probability [10,15,16,18]. The sufficient conditions for identifiability in BD given in our prequel paper [19] are suboptimal, except for a special class of so-called sub-band structured signals or filters.…”
Section: Identifiability Results
Confidence: 99%
“…In this paper, we focus on subspace or sparsity assumptions on the signal and the filter. These priors, which render BD better-posed by reducing the search space, were shown to be effective constraints or regularizers in various applications [5][6][7][8][9][10]. However, despite this success in practice, theoretical results on uniqueness in BD with a subspace or sparsity constraint are limited. Recently, the "lifting" scheme — recasting bilinear or quadratic inverse problems, such as blind deconvolution and phase retrieval, as rank-1 matrix recovery from linear measurements — has attracted considerable attention [10,11].…”
Confidence: 99%
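The lifting idea in the excerpt above can be verified numerically: circular convolution is bilinear in the filter and the signal, so under subspace models w = Bh, x = Cm the measurements y = w ⊛ x are *linear* in the rank-1 matrix h mᵀ. The dimensions and Gaussian subspaces below are illustrative assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, s = 16, 3, 4    # signal length, filter and signal subspace dims

B = rng.standard_normal((n, k))   # subspace model for the filter: w = B @ h
C = rng.standard_normal((n, s))   # subspace model for the signal: x = C @ m
h = rng.standard_normal(k)
m = rng.standard_normal(s)

def cconv(a, b):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Direct bilinear measurement of the unknown pair (h, m)
y = cconv(B @ h, C @ m)

# Lifted linear measurement: y_i = <A_i, h m^T>, where
# A_i[p, q] = (b_p circ-conv c_q)_i for columns b_p of B, c_q of C
A = np.array([[cconv(B[:, p], C[:, q]) for q in range(s)] for p in range(k)])
# A has shape (k, s, n); contract it against the rank-1 matrix h m^T
y_lifted = np.einsum('pqi,pq->i', A, np.outer(h, m))

assert np.allclose(y, y_lifted)
```

This is what makes the recast problem tractable: recovering the rank-1 matrix h mᵀ from the linear measurements y is a (nonconvex but well-studied) low-rank matrix recovery problem, which [10, 11] relax to convex programs.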
“…We employ lifting in the same spirit as [6], [7], but our goals are different. First, we deal with general bilinear inverse problems, which include the linear convolution model of [8], the circular convolution model of [7], [9], and the compressed bilinear observation model of [10] as special cases. Second, we focus solely on identifiability (as opposed to recoverability by convex optimization [6], [7]) and thus require far milder assumptions on the distribution of the input signals.…”
Section: Introduction
Confidence: 99%