Generalized methods and solvers for noise removal from piecewise constant signals. II. New methods
2011 · DOI: 10.1098/rspa.2010.0674

Abstract: Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising…

Cited by 37 publications (48 citation statements) · References 50 publications
“…BIC is already known to be an asymptotic result that is applicable only at large N (22), but it is particularly poorly suited to the change-point problem because the BIC complexity is too small for small N and much too large for large N, and therefore it is difficult to recommend this approach under any circumstance. Little, Jones, and co-workers (1,5,23,24) have recently introduced a number of convex methods closely related to change-point analysis. Although convexity is clearly a desirable property of an algorithm, the mathematical meaning of the convexified optimization is less clear.…”
Section: Competing Techniques (mentioning)
confidence: 99%
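To make the penalty being criticized concrete: in jump-penalized (Potts-type) PWC denoising, a BIC-style choice charges roughly log N per change point, which is the scaling the quoted passage argues fits poorly at both small and large N. The sketch below is our own illustration in Python; the function name and interface are hypothetical, not taken from either paper.

```python
import numpy as np

def jump_penalized_cost(x, y, gamma=None):
    """Potts-type objective: squared error plus a penalty gamma per jump.

    x     : candidate piecewise-constant estimate (1-D array)
    y     : noisy observations (1-D array, same length as x)
    gamma : penalty per change point; if None, use a BIC-style choice
            gamma = log(N), the scaling criticized in the quoted passage.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if gamma is None:
        gamma = np.log(len(y))              # BIC-style penalty per jump
    n_jumps = np.count_nonzero(np.diff(x))  # number of level changes in x
    return np.sum((y - x) ** 2) + gamma * n_jumps
```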
“…The approach we discuss in this article is called 'change-point analysis' and has been applied widely (1–5), including previous applications to single-molecule biophysics problems (6–10). We have recently developed an information-based approach to model selection, one that is new to our knowledge: the frequentist information criterion (FIC), an approach that greatly simplifies the analysis (C. H. LaMont and P. A. Wiggins, unpublished).…”
Section: Introduction (mentioning)
confidence: 97%
“…In (2.13), we have to solve a J-jump sparsity problem for A = id of the form […]. Nonetheless, the method has quadratic complexity with respect to the data size L. As in the case of the Potts problem with A = id, there are greedy algorithms which may perform well in practice but which do not guarantee a global optimum; see [7,8] for a discussion.…”
Section: Theorem 2.3: The J-Jump Sparsity Problem (S_J) Is NP-Hard (mentioning)
confidence: 99%
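For context, the quadratic-complexity exact method referred to above is, for the 1-D Potts problem with A = id, the classical Bellman-type dynamic program over segment boundaries. The following Python sketch is our own minimal illustration of that recursion under standard assumptions; variable names and the interface are ours, not code from the cited papers.

```python
import numpy as np

def potts_dp(y, gamma):
    """O(N^2) dynamic program for the 1-D Potts problem with A = id:
    minimize ||y - x||_2^2 + gamma * (number of jumps in x)
    over piecewise-constant x. Returns the optimal estimate."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # prefix sums give O(1) squared-error cost for any candidate segment
    s = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_cost(l, r):
        """Squared error of y[l:r] approximated by its mean."""
        m = r - l
        return s2[r] - s2[l] - (s[r] - s[l]) ** 2 / m

    best = np.full(n + 1, np.inf)   # best[r] = optimal cost of the prefix y[:r]
    best[0] = -gamma                # so the first segment is not charged as a jump
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(r):          # l = start index of the last segment
            c = best[l] + gamma + seg_cost(l, r)
            if c < best[r]:
                best[r], last[r] = c, l
    # backtrack the segment boundaries and fill each segment with its mean
    x = np.empty(n)
    r = n
    while r > 0:
        l = last[r]
        x[l:r] = (s[r] - s[l]) / (r - l)
        r = l
    return x
```

Usage is simply `x_hat = potts_dp(y, gamma)`. The double loop over segment ends and starts is what gives the quadratic cost in the data size that the quoted passage points out; the pruning and greedy strategies discussed in the next excerpt trade some of this exactness or generality for speed.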
“…Furthermore, there are strategies to prune the search space which speed up the computations in practice [48,49]. Besides the dynamic programming approach, there are greedy algorithms which perform well in practice but which come without theoretical guarantees; for a discussion, see [7,8].…”
Section: Theorem 2.1: The Potts Problem (P_γ) Has a Minimizer (mentioning)
confidence: 99%
“…The optimization problem thus iteratively approaches the solution of the ideal ℓ0-norm objective function. We also mention that an interesting review of several solvers for the general problem of mixed ℓp–ℓ0-norm minimization in the context of piecewise constant function approximation is given in [9,10]; their adaptation to the problem of sparse linear prediction analysis can indeed be beneficial (particularly the stepwise jump penalization algorithm, which is shown to be highly efficient and reliable in detecting sparse events).…”
Section: Introduction (mentioning)
confidence: 99%
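As a rough illustration of the kind of greedy, stepwise jump-placement strategy alluded to above, here is a simplified Python sketch of our own. It is not the authors' stepwise jump penalization algorithm; the function name, stopping rule, and interface are assumptions made for illustration.

```python
import numpy as np

def greedy_jump_placement(y, gamma, max_jumps=None):
    """Greedily add one jump (segment boundary) at a time, each time choosing
    the position that most reduces the jump-penalized squared-error cost.
    Stops when no single additional jump lowers the cost. A simplified sketch
    of greedy/stepwise jump placement; not the published algorithm."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    bounds = [0, n]                       # current segment boundaries
    if max_jumps is None:
        max_jumps = n - 1

    def sse(seg):
        """Squared error of a segment approximated by its mean."""
        return np.sum((seg - seg.mean()) ** 2)

    for _ in range(max_jumps):
        best_gain, best_pos = 0.0, None
        for i in range(len(bounds) - 1):
            l, r = bounds[i], bounds[i + 1]
            for p in range(l + 1, r):     # candidate new boundary inside segment
                gain = sse(y[l:r]) - sse(y[l:p]) - sse(y[p:r]) - gamma
                if gain > best_gain:
                    best_gain, best_pos = gain, p
        if best_pos is None:              # no jump reduces the penalized cost
            break
        bounds = sorted(bounds + [best_pos])
    # rebuild the piecewise-constant estimate from the chosen boundaries
    x = np.empty(n)
    for l, r in zip(bounds[:-1], bounds[1:]):
        x[l:r] = y[l:r].mean()
    return x
```

Such greedy placement is fast and often effective, but, as the excerpts above note, it carries no guarantee of reaching the global optimum of the jump-penalized objective.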