ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9054356

Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently

Abstract: Multi-channel sparse blind deconvolution, or convolutional sparse coding, refers to the problem of learning an unknown filter by observing its circulant convolutions with multiple input signals that are sparse. This problem finds numerous applications in signal processing, computer vision, and inverse problems. However, it is challenging to learn the filter efficiently due to the bilinear structure of the observations with respect to the unknown filter and inputs, leading to global ambiguities of identification…
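
For reference, a minimal statement of the observation model described in the abstract, in our own notation (not quoted from the paper): each channel is a circulant convolution of the unknown filter with a sparse input,

$$
y_i \;=\; a \circledast x_i, \qquad i = 1, \dots, N,
$$

where $\circledast$ denotes circulant (cyclic) convolution, $a$ is the unknown filter, and each $x_i$ is sparse. Since $(\alpha\, s_\tau[a]) \circledast (\alpha^{-1} s_{-\tau}[x_i]) = a \circledast x_i$ for any nonzero scale $\alpha$ and circular shift $s_\tau$, the filter is at best identifiable up to a signed scale and a shift; this is the standard form of the global ambiguity the abstract alludes to.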

Cited by 13 publications (11 citation statements). References 43 publications.

“…At the moment, one can observe how researchers are moving away from convex optimization or fitting for it by examining related problems. An example is the work on multichannel deconvolution [121,122].…”
Section: Optimization-based Deconvolution Methods (mentioning)
confidence: 99%
“…As we discuss in the following, many engineering problems can be naturally cast as separable nonsmooth optimization over the Stiefel manifold. Taking the ℓ1 norm loss as an example, one may argue that the nonsmoothness of the ℓ1 norm can be avoided by considering its smooth variants such as the Huber loss [41,57] or the log cosh(·) function [31,62,66]. However, in practice the nonsmooth optimization formulation is found to have several clear advantages over its smooth counterpart: (i) it better promotes the robustness of the solution against outliers [15,24,47] and requires fewer samples for exact recovery [5] than its smooth loss variant [66]; (ii) solving it can directly return exact solutions [5,47,79], while optimizing its smoothing variants only produces approximate solutions [48,57,65,66].…”
Section: Motivations (mentioning)
confidence: 99%
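
For concreteness, the smooth surrogates of the ℓ1 penalty mentioned in this excerpt, the Huber loss and the log-cosh loss, can be written as follows (standard definitions, not quoted from the cited works):

$$
h_\mu(z) = \begin{cases} \dfrac{z^2}{2\mu}, & |z| \le \mu, \\[4pt] |z| - \dfrac{\mu}{2}, & |z| > \mu, \end{cases}
\qquad
g_\lambda(z) = \lambda \log\cosh\!\left(\tfrac{z}{\lambda}\right).
$$

Both approach $|z|$ as the smoothing parameter ($\mu$ or $\lambda$) tends to zero, which is why minimizing them yields only approximate solutions to the original nonsmooth problem.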
“…where C_{y_i} denotes the circulant matrix of y_i and P is a preconditioning matrix that whitens the data (see [48,57,62] for more details). Although smooth variants of (6) have been considered in [48,57,62], as aforementioned, experimental results in [57] suggest that directly optimizing the nonsmooth objective (6) via the Riemannian subgradient method demonstrates much superior performance.…”
Section: Motivations (mentioning)
confidence: 99%
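
To illustrate the Riemannian subgradient method referred to in this excerpt, below is a minimal sketch (not the authors' code) for the nonsmooth objective min over unit-norm q of (1/m) Σ_i ‖C_{y_i} P q‖_1, where C_{y_i} is the circulant matrix of channel y_i and P is a whitening preconditioner. The function name, step-size schedule, and initialization are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import circulant

def riemannian_subgradient_descent(Y, P, n_iters=500, step0=0.1, decay=0.99):
    """Sketch of Riemannian subgradient descent on the unit sphere for
    (1/m) * sum_i ||C_{y_i} P q||_1, with Y: (m, n) channels and P: (n, n) preconditioner."""
    m, n = Y.shape
    C = [circulant(y) for y in Y]        # circulant matrix of each observed channel
    q = np.random.randn(n)
    q /= np.linalg.norm(q)               # random initialization on the sphere
    step = step0
    for _ in range(n_iters):
        # Euclidean subgradient of the l1 objective: (1/m) * sum_i P^T C_i^T sign(C_i P q)
        g = sum(P.T @ (Ci.T @ np.sign(Ci @ (P @ q))) for Ci in C) / m
        g_tan = g - (q @ g) * q          # project onto the tangent space of the sphere at q
        q = q - step * g_tan             # subgradient step ...
        q /= np.linalg.norm(q)           # ... followed by retraction back onto the sphere
        step *= decay                    # geometrically decaying step size
    return q
```

Renormalizing after each step is the standard retraction on the sphere; the geometrically decaying step size follows common practice for Riemannian subgradient methods on nonsmooth problems, but it is only an assumption here, not a detail taken from the cited works.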
“…Many machine learning and modern signal processing applications, such as biometric authentication/identification and recommender systems, follow sparse signal processing techniques [1], [2], [3], [4], [5], [6]. The sparse synthesis model focuses on those data sets that can be approximated using a linear combination of only a small number of cells of a dictionary.…”
Section: Introduction (mentioning)
confidence: 99%
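
As a brief illustration of the sparse synthesis model this excerpt refers to (our notation, not the cited paper's): a signal $y$ is modeled as

$$
y \;\approx\; D x \;=\; \sum_{j=1}^{p} x_j\, d_j, \qquad \|x\|_0 \le k \ll p,
$$

where $D = [d_1, \dots, d_p]$ is the dictionary and only $k$ of the coefficients are nonzero, so $y$ is a linear combination of a small number of dictionary elements.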