2016
DOI: 10.1016/j.dsp.2016.03.010

Sparse signal reconstruction via concave continuous piecewise linear programming

Cited by 6 publications (2 citation statements)
References 40 publications
“…Note that with d = 1, the PWL function f(x) reduces to a linear function; thus, we regard linear functions as a special case of PWL functions throughout the Primer. Compared to other nonlinear models, PWL functions possess an explicit geometric interpretation, and many practical systems can easily be transformed into PWL nonlinear functions 37 , such as PWL memristors 38,39 , specialized cost functions [40][41][42][43][44] , and certain mathematical programs [45][46][47][48][49][50] . As powerful nonlinear models, PWL functions are proven universal approximators 51 : let Ω ⊂ R^n be a compact domain, and p(x) : Ω → R be a continuous function.…”
Section: PWL Functions (mentioning)
confidence: 99%
“…It is a signal processing technique for efficiently acquiring and reconstructing signals by finding solutions to underdetermined linear systems. It exploits the sparsity of the signal to recover it, and thus uses far fewer samples than the sampling theorem requires [7,8]. A source is sparse in a given representation domain if most of its elements are close to zero.…”
(mentioning)
confidence: 99%
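This excerpt summarises compressed sensing: a sparse signal is recovered from an underdetermined linear system using far fewer measurements than the sampling theorem would demand. As a hedged illustration, the sketch below recovers a sparse vector by basis pursuit, i.e. L1 minimization posed as a linear program; this is a generic recovery approach, not the concave CPWL programming method of the indexed paper, and the problem sizes, random sensing matrix, and use of scipy.optimize.linprog are assumptions made for the demo.

```python
# A minimal, illustrative sketch of sparse recovery via basis pursuit
# (L1 minimization written as a linear program). NOT the indexed paper's
# concave CPWL method; sizes and sparsity level are assumed for the demo.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                  # measurements, dimension, sparsity (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                        # underdetermined system: m < n

# Basis pursuit: min ||x||_1  s.t.  Ax = y, with the split x = u - v, u, v >= 0
c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])             # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The split into non-negative parts u and v is what turns the non-smooth L1 objective into a linear program, which is one standard way the sparsity prior described in the excerpt is exploited in practice.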