2016
DOI: 10.1007/s11222-016-9636-3
On optimal multiple changepoint algorithms for large data

Abstract: Many common approaches to detecting changepoints, for example based on statistical criteria such as penalised likelihood or minimum description length, can be formulated in terms of minimising a cost over segmentations. We focus on a class of dynamic programming algorithms that can solve the resulting minimisation problem exactly, and thus find the optimal segmentation under the given statistical criteria. The standard implementation of these dynamic programming methods has a computational cost that scales at…
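The abstract's phrase "minimising a cost over segmentations" has a standard explicit form. Assuming the usual notation (not spelled out in the truncated abstract): data $y_{1:n}$, $k$ changepoints $0 = \tau_0 < \tau_1 < \dots < \tau_k < \tau_{k+1} = n$, a segment cost $\mathcal{C}(\cdot)$, and a penalty $\beta$ per changepoint, the criteria take the form

```latex
\min_{k,\;\tau_{1:k}} \; \sum_{i=0}^{k} \left[ \mathcal{C}\!\left(y_{(\tau_i+1):\tau_{i+1}}\right) + \beta \right]
```

where penalised likelihood and minimum description length criteria correspond to particular choices of $\mathcal{C}$ and $\beta$.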

Cited by 161 publications (183 citation statements). References 26 publications.
“…Killick et al (2012) introduced the pruned exact linear time (PELT) method, which has the worst case computational cost of order O(n^2); while in the situations where the number of change points increases linearly with n, the expected time of PELT is of order O(n). There are also other algorithms, including Rigaill (2010) and Maidstone et al (2017), which have been shown to have an expected cost which is smaller than that of PELT, but which have the worst case cost also of order O(n^2).…”
Section: Introduction
confidence: 99%
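The quadratic-time recursion that PELT and its relatives accelerate can be sketched as follows. This is a minimal illustration, not code from any of the cited papers: function names are invented for the example, and the cost shown is the Gaussian change-in-mean (least-squares) cost, one common choice.

```python
def gaussian_cost(cumsum, cumsum_sq, a, b):
    """Least-squares cost of fitting a single mean to y[a:b] (b exclusive)."""
    n = b - a
    s = cumsum[b] - cumsum[a]
    s2 = cumsum_sq[b] - cumsum_sq[a]
    return s2 - s * s / n

def optimal_partitioning(y, beta):
    """Exact minimisation of (sum of segment costs) + beta per changepoint.

    O(n^2): for each end point s, scan every candidate last changepoint t < s.
    """
    n = len(y)
    cumsum = [0.0] * (n + 1)
    cumsum_sq = [0.0] * (n + 1)
    for i, v in enumerate(y):
        cumsum[i + 1] = cumsum[i] + v
        cumsum_sq[i + 1] = cumsum_sq[i] + v * v

    F = [0.0] * (n + 1)   # F[s]: optimal cost of segmenting y[:s]
    F[0] = -beta          # so the first segment carries no penalty
    last = [0] * (n + 1)  # optimal last changepoint, for traceback
    for s in range(1, n + 1):
        best, best_t = float("inf"), 0
        for t in range(s):
            cand = F[t] + gaussian_cost(cumsum, cumsum_sq, t, s) + beta
            if cand < best:
                best, best_t = cand, t
        F[s], last[s] = best, best_t

    # Trace back the changepoint locations.
    cps, s = [], n
    while s > 0:
        s = last[s]
        if s > 0:
            cps.append(s)
    return sorted(cps)
```

On a series with one clear mean shift, e.g. twenty zeros followed by twenty tens, `optimal_partitioning(y, beta=5.0)` recovers the single changepoint at index 20.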
“…Precisely, for two indexes t and s (t < s < T), the pruning rule is given by: An extension of Pelt is described in [9] to solve the linearly penalized change point detection for a range of smoothing parameter values [β_min, β_max]. Pelt has been applied on DNA sequences [16,17], physiological signals [89], and oceanographic data [111].…”
Section: Solution To Problem 2 (P2): Pelt
confidence: 99%
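The displayed pruning rule itself appears to have been lost in extraction. In Killick et al.'s formulation it states that if F(t) + C(y_{(t+1):s}) + K ≥ F(s), then t can never again be the optimal last changepoint for any later end point, so it can be dropped from the candidate set. A minimal sketch of that pruning step, assuming the least-squares cost (for which the constant K is 0) and illustrative names:

```python
def pelt(y, beta):
    """PELT-style search: optimal partitioning with candidate pruning.

    Sketch only; names are illustrative. The cost is the least-squares
    (Gaussian change-in-mean) cost, for which the pruning constant K = 0.
    """
    n = len(y)
    cumsum = [0.0] * (n + 1)
    cumsum_sq = [0.0] * (n + 1)
    for i, v in enumerate(y):
        cumsum[i + 1] = cumsum[i] + v
        cumsum_sq[i + 1] = cumsum_sq[i] + v * v

    def cost(a, b):
        s = cumsum[b] - cumsum[a]
        return (cumsum_sq[b] - cumsum_sq[a]) - s * s / (b - a)

    F = [0.0] * (n + 1)
    F[0] = -beta
    last = [0] * (n + 1)
    cand = [0]  # surviving candidate last-changepoint positions
    for s in range(1, n + 1):
        vals = [F[t] + cost(t, s) + beta for t in cand]
        best = min(vals)
        F[s] = best
        last[s] = cand[vals.index(best)]
        # Pruning: keep t only if F(t) + C(y[t:s]) <= F(s), i.e. t could
        # still be the optimal last changepoint at some future end point.
        cand = [t for t, v in zip(cand, vals) if v - beta <= F[s]]
        cand.append(s)

    cps, s = [], n
    while s > 0:
        s = last[s]
        if s > 0:
            cps.append(s)
    return sorted(cps)
```

The candidate list shrinks whenever a past index becomes provably suboptimal; when changepoints grow linearly with n, this is what yields the expected linear runtime mentioned in the first citation statement.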
“…Indeed, the inference of these parameters requires visiting the whole segmentation space, which is prohibitive in terms of computational time when the visit is performed in a naive way. The Dynamic Programming (DP) algorithm (introduced by [16] and used for the first time in segmentation by [17]) and, recently, its pruned versions [18,19,20], is the only efficient algorithm that retrieves the exact solution (i.e. the optimal segmentation according to the log-likelihood or least-squares contrasts, for example) in a faster way.…”
Section: Introduction
confidence: 99%