2008
DOI: 10.5194/npg-15-321-2008

How does the quality of a prediction depend on the magnitude of the events under study?

Abstract: We investigate the predictability of extreme events in time series. The focus of this work is to understand under which circumstances large events are better predictable than smaller events. To this end, we use a simple prediction algorithm based on precursory structures which are identified via the maximum likelihood principle. Using these precursory structures, we predict threshold crossings in autocorrelated processes of order one, which are either Gaussian, exponentially, or Pareto distributed. The …
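A minimal sketch of the setup described in the abstract, assuming an AR(1) process and the current value as precursory variable; the binned conditional event probability stands in for the maximum-likelihood selection of precursory structures, and all parameters (threshold quantile, decision level) are illustrative choices, not the authors':

```python
import numpy as np

# Illustrative sketch (not the authors' code): predict threshold crossings in an
# AR(1) process x_n = a*x_{n-1} + xi_n driven by different noise distributions,
# using the preceding value as precursory variable.
rng = np.random.default_rng(0)

def ar1(n, a=0.75, noise="gauss"):
    """Autocorrelated process of order one with Gaussian, exponential, or Pareto noise."""
    xi = {"gauss": rng.standard_normal(n),
          "exp": rng.exponential(size=n),
          "pareto": rng.pareto(3.0, size=n)}[noise]
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + xi[i]
    return x

x = ar1(200_000)
eta = np.quantile(x, 0.99)          # event threshold: upper 1% of values (illustrative)
events = x[1:] >= eta               # event: the next value crosses the threshold
precursor = x[:-1]                  # precursory structure: the current value

# Estimate P(event | precursor in bin) and raise an alarm wherever this
# conditional probability exceeds a decision level (a stand-in for the
# maximum-likelihood selection of precursors described in the abstract).
edges = np.quantile(precursor, np.linspace(0, 1, 51))
idx = np.clip(np.digitize(precursor, edges) - 1, 0, 49)
p_event = np.array([events[idx == k].mean() if np.any(idx == k) else 0.0
                    for k in range(50)])
alarms = p_event[idx] >= 0.5        # illustrative decision level

hit_rate = alarms[events].mean()            # P(alarm | event)
false_alarm_rate = alarms[~events].mean()   # P(alarm | no event)
print(f"hit rate {hit_rate:.2f}, false alarm rate {false_alarm_rate:.3f}")
```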

Cited by 11 publications (14 citation statements); References 16 publications.
“…Franzke [14] applied the method set out in [10] to predict extreme threshold exceedances in a systematically derived stochastic dynamical system representing climate variability by the resolved (slow) variable(s) and weather variability by noise in place of the unresolved (fast) variable(s) [16]. He maintained the earlier statement [8] in this case, measuring the prediction skill by the ROC statistics, but on the basis of considering only two high threshold values. On the other hand, Sterk et al [15] considered a number of dynamical systems of various complexity, and various physical observables.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, the above mentioned examples of critical transitions that have been predicted and observed only once [13,18] do not allow us to assess the quality of the predictors in any statistically relevant way. It is in fact common for indicators of extreme events [21][22][23][24][25][26][27] to be tested with standard measures for the quality of classifiers, such as skill scores [28,29], contingency tables [30] or receiver operator characteristic curves (ROC curves) [30]. Additionally, it is common to test for the dependence of the prediction success on parameters related to the estimation of predictors, the prediction procedure and the events under study.…”
Section: Introduction (mentioning)
confidence: 99%
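For concreteness, a minimal sketch of the verification tools named in the quoted passage: a 2x2 contingency table and a ROC curve traced by sweeping the alarm threshold. The arrays `events` (observed binary events) and `score` (continuous predictor) are hypothetical inputs, not data from the cited studies:

```python
import numpy as np

# Minimal sketch: 2x2 contingency table and ROC curve obtained by sweeping the
# alarm threshold over a continuous predictor. Inputs are hypothetical.
def contingency_table(events, alarms):
    hits = np.sum(alarms & events)
    false_alarms = np.sum(alarms & ~events)
    misses = np.sum(~alarms & events)
    correct_rejections = np.sum(~alarms & ~events)
    return hits, false_alarms, misses, correct_rejections

def roc_points(events, score, n_thresholds=100):
    """Hit rate vs. false alarm rate for a range of decision thresholds."""
    points = []
    for t in np.quantile(score, np.linspace(0, 1, n_thresholds)):
        a, b, c, d = contingency_table(events, score >= t)
        hit_rate = a / (a + c) if (a + c) else 0.0
        false_alarm_rate = b / (b + d) if (b + d) else 0.0
        points.append((false_alarm_rate, hit_rate))
    return np.array(points)
```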
“…Additionally, it is common to test for the dependence of the prediction success on parameters related to the estimation of predictors, the prediction procedure and the events under study. Such parameters could be, for example, the length of the data record used to estimate the predictor, the lead time (time between issuing the forecast and the observation of the event) or the magnitude and relative frequency of the event under study [23]. Similar tests for indicators associated with CSD are, apart from one study [31], still missing.…”
Section: Introduction (mentioning)
confidence: 99%
“…stochastic processes large increments are better predictable if the process is Gaussian, whereas large increments become less predictable if the underlying distribution has a power law tail. However, in the follow-up study [12], which is concerned with threshold crossings instead of increments, it was found again that extremes are always better predictable. The first conclusion is also supported by the work of Franzke [13,14] in the context of dynamic-stochastic models.…”
Section: Introduction (mentioning)
confidence: 99%
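The two event definitions contrasted in the quoted passage can be stated compactly; this is an illustrative reading, with `d` and `eta` as hypothetical increment size and threshold:

```python
import numpy as np

# Illustrative definitions of the two event types contrasted above: large
# increments x_{n+1} - x_n >= d versus threshold crossings x_{n+1} >= eta.
def increment_events(x, d):
    return np.diff(x) >= d

def threshold_crossings(x, eta):
    return x[1:] >= eta
```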
“…The predictability of extremes can be measured in different ways. By treating extreme events as binary events one can measure prediction skill by means of a receiver operator characteristic (ROC) curve, which is a graph of the hit rate against the false alarm rate [9][10][11][12][13][14]. Another possible measure is the extreme dependency score developed by Stephenson et al [17], which does not tend to zero for vanishingly rare events, unlike scores such as the equitable threat score.…”
Section: Introduction (mentioning)
confidence: 99%
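A sketch of the extreme dependency score computed from contingency-table counts; the formula follows my reading of Stephenson et al. and should be checked against the original reference:

```python
import numpy as np

# Sketch of the extreme dependency score (EDS), assuming the form
# EDS = 2*ln((hits + misses)/n) / ln(hits/n) - 1 from Stephenson et al.
# Unlike many skill scores it does not tend to zero as events become
# vanishingly rare.
def extreme_dependency_score(hits, false_alarms, misses, correct_rejections):
    n = hits + false_alarms + misses + correct_rejections
    base_rate = (hits + misses) / n
    if hits == 0:
        return -1.0  # limiting value when no events are hit
    if hits == n:
        return 1.0   # degenerate case: every time step is a correctly forecast event
    return 2.0 * np.log(base_rate) / np.log(hits / n) - 1.0
```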