2019
DOI: 10.1080/10618600.2019.1647216

Parallelization of a Common Changepoint Detection Method

Abstract: In recent years, various means of efficiently detecting changepoints in the univariate setting have been proposed, with one popular approach involving minimising a penalised cost function using dynamic programming. In some situations, these algorithms can have an expected computational cost that is linear in the number of data points; however, the worst-case cost remains quadratic. We introduce two means of improving the computational performance of these methods, both based on parallelising the dynamic programming…
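The dynamic programming approach the abstract refers to solves the penalised minimisation exactly via the optimal-partitioning recursion F(t) = min over 0 ≤ s < t of F(s) + C(y_(s+1):t) + β. Below is a minimal Python sketch of that quadratic-time recursion for the Gaussian change-in-mean cost; it illustrates the baseline method being parallelised, not the paper's parallel scheme, and all function names are illustrative.

```python
import numpy as np

def gaussian_cost(cumsum, cumsum2, s, t):
    """Cost of segment y[s:t] (0-indexed, half-open) under a Gaussian
    change-in-mean model: sum of squares minus the fitted-mean term."""
    m = t - s
    seg_sum = cumsum[t] - cumsum[s]
    seg_sq = cumsum2[t] - cumsum2[s]
    return seg_sq - seg_sum**2 / m

def optimal_partitioning(y, beta):
    """O(n^2) dynamic program: F[t] = min_s F[s] + C(y[s:t]) + beta."""
    n = len(y)
    cumsum = np.concatenate(([0.0], np.cumsum(y)))
    cumsum2 = np.concatenate(([0.0], np.cumsum(np.square(y))))
    F = np.full(n + 1, np.inf)
    F[0] = -beta  # convention so that a single segment costs C(y) alone
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for s in range(t):
            cand = F[s] + gaussian_cost(cumsum, cumsum2, s, t) + beta
            if cand < F[t]:
                F[t], last[t] = cand, s
    # Backtrack the changepoint locations from the stored argmins.
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)
```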

Cited by 15 publications (15 citation statements). References 20 publications.
“…It can be shown (e.g. Tickle et al, 2018) that the Schwarz information criterion is asymptotically equivalent to the ℓ0 penalization. Note that the results obtained there are asymptotic, while ours are non-asymptotic and allow all parameters to vary with the sample size n. Another related area is the reduced isotonic regression problem, which assumes the monotonic signal is piecewise-constant and which aims to recover the signal.…”
Section: Introduction (mentioning; confidence: 99%)
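To make the equivalence in this quote concrete, the ℓ0-penalised change-in-mean criterion can be written as follows. This is a schematic rendering under a Gaussian noise assumption, not a formula taken from either paper.

```latex
% Schematic l0-penalised least squares for change-in-mean
% (Gaussian noise with variance sigma^2 is an assumption here).
\[
  \hat{\theta} = \operatorname*{arg\,min}_{\theta \in \mathbb{R}^n}
    \frac{1}{2}\sum_{i=1}^{n} (y_i - \theta_i)^2
    + \lambda\,\#\{\, i : \theta_{i+1} \neq \theta_i \,\}.
\]
% Choosing lambda proportional to sigma^2 log(n) reproduces the
% Schwarz information criterion's log(n)-per-parameter charge up to
% constants, which is the asymptotic equivalence the quote invokes.
```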
“…Indeed, employing a pruning step as per the PELT procedure of Killick et al (2012) results in an expected cost of O(qd). As shown in Tickle et al (2020), this can be improved further to a worst-case cost of O(qd) using parallelisation. Therefore, the worst-case computational complexity of the post-processing step is O(nd).…”
Section: Subset Pseudo-code, Post-processing and Computational Discussion (mentioning; confidence: 99%)
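The pruning step mentioned here discards candidate split points that can no longer be optimal at any later time. A minimal sketch of that rule for the Gaussian change-in-mean cost is below, assuming the standard PELT condition with constant K = 0; it illustrates the pruning idea, not the citing paper's post-processing step.

```python
import numpy as np

def pelt_cost(y, beta):
    """Pruned dynamic program in the spirit of PELT (Killick et al., 2012).

    Returns the optimal penalised cost F[n]. A candidate s is dropped
    once F[s] + C(s, t) > F[t], since (with K = 0 for this cost) it can
    then never be optimal at any later time t'.
    """
    n = len(y)
    cs = np.concatenate(([0.0], np.cumsum(y)))
    cs2 = np.concatenate(([0.0], np.cumsum(np.square(y))))

    def cost(s, t):
        # Gaussian change-in-mean cost of the segment y[s:t] (half-open).
        m = t - s
        return (cs2[t] - cs2[s]) - (cs[t] - cs[s]) ** 2 / m

    F = np.full(n + 1, np.inf)
    F[0] = -beta
    candidates = [0]
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in candidates]
        F[t] = min(vals)
        # Keep s only if it could still beat F[t] at some later time.
        candidates = [s for s, v in zip(candidates, vals) if v - beta <= F[t]]
        candidates.append(t)
    return F[n]
```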
“…The criterion to optimize contains a penalty that is linear in the number of change-points. It is classically set to 2kσ̂² log(n) for the change-in-mean problem [34, 31], with σ̂² the estimated noise variance and k the number of changes. For the change-in-slope problem, theorems for a similar penalty have been given in the asymptotic regime [36].…”
Section: Parameter Estimation (mentioning; confidence: 99%)
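In practice σ² is unknown, so the quoted penalty is computed with a plug-in estimate. The sketch below uses the median absolute deviation of first differences, a common robust choice for change-in-mean data; that estimator, and the function name, are assumptions for illustration rather than the estimator used in [34, 31].

```python
import numpy as np

def mean_change_penalty(y, k=1):
    """Penalty 2*k*sigma_hat^2*log(n) from the quoted criterion.

    sigma^2 is estimated here by the MAD of first differences, scaled
    for Gaussian data (diffs have sd sigma*sqrt(2)); this estimator
    choice is an assumption, not taken from the quoted references.
    """
    n = len(y)
    sigma_hat = np.median(np.abs(np.diff(y))) / (0.6745 * np.sqrt(2))
    return 2.0 * k * sigma_hat**2 * np.log(n)
```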
“…This penalty controls the amount of evidence we need to add a change: the larger this quantity, the smaller k. We emphasise that k is an unknown quantity. The penalty value is often set to 2σ̂² log(n) [34, 31, 36], with σ̂² an estimate of the variance σ².…”
Section: Optimization Problem (mentioning; confidence: 99%)
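Putting the two quoted pieces together, the optimisation problem being described has the following schematic form, with the penalty entering linearly through the number of changes k; the notation here is assumed for illustration, not that of [34, 31, 36].

```latex
% Penalised changepoint objective, jointly over the number of
% changes k and the locations tau_1 < ... < tau_k (notation assumed).
\[
  \min_{k,\; 0 = \tau_0 < \tau_1 < \cdots < \tau_k < \tau_{k+1} = n}
    \sum_{j=0}^{k} \mathcal{C}\!\left(y_{(\tau_j + 1):\tau_{j+1}}\right)
    + \beta k,
  \qquad \beta = 2\hat{\sigma}^2 \log(n).
\]
% A larger beta demands more evidence per change, so the minimiser
% selects a smaller k.
```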