ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683334

Multichannel Sparse Blind Deconvolution on the Sphere

Abstract: Multichannel blind deconvolution is the problem of recovering an unknown signal f and multiple unknown channels x_i from their circular convolutions y_i = x_i ⊛ f (i = 1, 2, …, N). We consider the case where the x_i's are sparse and convolution with f is invertible. Our nonconvex optimization formulation solves for a filter h on the unit sphere that produces sparse outputs y_i ⊛ h. Under some technical assumptions, we show that all local minima of the objective function correspond to the inverse filter of f …
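The formulation in the abstract — find a filter h on the unit sphere whose outputs y_i ⊛ h are sparse — can be sketched as projected gradient ascent on a smooth sparsity surrogate. The surrogate below (maximizing the ℓ4-norm of the outputs) and the names `circ_conv`, `circ_corr`, and `sphere_step` are illustrative assumptions; the paper's exact objective and algorithm may differ.

```python
import numpy as np

def circ_conv(a, b):
    # Circular convolution of two equal-length real signals via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    # Circular cross-correlation of a with b (the adjoint of circ_conv(a, .)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def sphere_step(h, Y, mu):
    # One projected-gradient ascent step on the unit sphere for the
    # sparsity surrogate sum_i ||y_i (*) h||_4^4 / 4, which is large
    # when the outputs y_i (*) h are spiky/sparse.
    g = np.zeros_like(h)
    for y in Y:
        z = circ_conv(y, h)
        g += circ_corr(y, z ** 3)          # Euclidean gradient w.r.t. h
    g -= np.dot(g, h) * h                  # project onto the sphere's tangent space
    h_new = h + mu * g                     # ascent step
    return h_new / np.linalg.norm(h_new)   # retract back to the unit sphere

# Demo on synthetic data: sparse channels x_i convolved with a short filter f.
rng = np.random.default_rng(0)
n, N = 64, 8
f = np.pad(rng.standard_normal(8), (0, n - 8))
X = [np.where(rng.random(n) < 0.1, rng.standard_normal(n), 0.0) for _ in range(N)]
Y = [circ_conv(x, f) for x in X]          # observations y_i = x_i (*) f
h = rng.standard_normal(n)
h /= np.linalg.norm(h)
for _ in range(200):
    h = sphere_step(h, Y, mu=1e-3)
```

The per-step renormalization keeps the iterate exactly on the sphere, matching the constraint in the paper's formulation.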

Cited by 6 publications (42 citation statements) · References 64 publications (133 reference statements)
“…In particular, considering the interesting regime where θ = O(1) and κ = O(1), it is sufficient to set µ = O((log n)^{-1/6} n^{-3/4}), which leads to a sample size p = O(n^{4.5}) up to logarithmic factors. This significantly improves over the prior work of Li and Bresler [4], which requires a sample complexity of p = O(n^9) up to logarithmic factors. See further discussions in Section 5.…”
Section: Convergence Guarantees of MGD
confidence: 85%
“…As we shall see later, while this approach works well when C(g) is an orthogonal matrix, further care needs to be taken when it is a general invertible matrix in order to guarantee a benign optimization geometry. Following [4, 10], we introduce the following pre-conditioned optimization problem:…”
Section: Nonconvex Optimization on the Sphere
confidence: 99%
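The citation above motivates preconditioning: when C(g) is a general invertible (rather than orthogonal) circulant matrix, the data are first whitened so the landscape becomes benign. A minimal sketch of one such preconditioner — dividing by the square root of the channels' average power spectrum, computed diagonally in the Fourier domain since circulant matrices diagonalize under the FFT — is shown below. The functions `fourier_preconditioner` and `apply_preconditioner` are hypothetical names, and the operator in the cited work may differ in its exact form.

```python
import numpy as np

def fourier_preconditioner(Y, eps=1e-8):
    # Diagonal Fourier-domain weights from the channels' average power
    # spectrum (illustrative sketch, not necessarily the paper's operator).
    # Dividing by sqrt(average power) whitens the data, so convolution
    # with the preconditioned channels behaves closer to an orthogonal map.
    power = np.mean([np.abs(np.fft.fft(y)) ** 2 for y in Y], axis=0)
    return 1.0 / np.sqrt(power + eps)

def apply_preconditioner(w, y):
    # Apply the Fourier-domain weights w to a signal y.
    return np.real(np.fft.ifft(w * np.fft.fft(y)))

# Demo: after preconditioning, the channels' average power spectrum is flat.
rng = np.random.default_rng(1)
Y = [rng.standard_normal(64) for _ in range(8)]
w = fourier_preconditioner(Y)
avg_power = np.mean(
    [np.abs(np.fft.fft(apply_preconditioner(w, y))) ** 2 for y in Y], axis=0
)
```

Because the weights are real and symmetric across frequencies, the preconditioned signals remain real-valued, and the averaged output spectrum is flat up to the regularization eps.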
“…On the other hand, we believe the probability tools of decoupling and measure concentration we developed here can form a solid foundation for studying other nonconvex problems under the random convolutional model. Those problems include blind calibration [80-82], sparse blind deconvolution [76, 83-93], and convolutional dictionary learning [16, 94-97].…”
Section: Geometric Analysis and Global Result
confidence: 99%