2020 6th Conference on Data Science and Machine Learning Applications (CDMA)
DOI: 10.1109/cdma47397.2020.00006

Range Based Confusion Matrix for Imbalanced Time Series Classification

Cited by 12 publications (7 citation statements) · References 16 publications
“…True negatives are more difficult to determine. Even though Zhou and Del Valle [32] recommend treating all the rest as true negatives, that approach does not hold in our case, as there is no predetermined number of change events that can happen during a given time series. The maximum number of events given a one-year spacing interval for the three years of reference data would be seven (four predicted breaks and three reference points of change).…”
Section: Reference Data and Validation
Mentioning confidence: 97%
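
To make the excerpt's counting concrete, here is a minimal sketch of one way true negatives could be derived once the event space is capped rather than treated as "all the rest"; the function name and the use of the seven-event cap read off the excerpt are illustrative assumptions, not the citing authors' code.

# Hypothetical sketch: true negatives as unused event slots under a fixed cap.
# Assumption (taken from the excerpt): at most 7 events per series, i.e. four
# predicted breaks plus three reference points of change over three years.
def derive_true_negatives(tp, fp, fn, max_events=7):
    """True negatives = event slots not consumed by any TP, FP, or FN."""
    return max(max_events - (tp + fp + fn), 0)

# Example: 2 matched breaks, 1 spurious prediction, 1 missed reference change -> 3 TN
print(derive_true_negatives(tp=2, fp=1, fn=1))
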
“…For the purpose of assessing how well time series break predictions match the reference data, we developed a distance-in-time approach similar to that proposed by Zhou and Del Valle [32]. We treated each reference and predicted break as an event and then determined whether it was a true positive, false positive, false negative or true negative.…”
Section: Reference Data and Validation
Mentioning confidence: 99%
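
As an illustration of such a distance-in-time matching rule, a minimal Python sketch follows; the one-year tolerance, the greedy nearest-neighbour assignment, and the function name are assumptions made for the example, not the method as published.

# Hedged sketch: classify break events as TP / FP / FN by their distance in time.
def match_breaks(predicted, reference, tolerance=1.0):
    """predicted, reference: lists of break times (e.g. decimal years).
    A prediction within `tolerance` of a still-unmatched reference break is a TP."""
    matched = set()
    tp = 0
    for p in predicted:
        candidates = [(abs(p - r), i) for i, r in enumerate(reference) if i not in matched]
        if candidates:
            dist, i = min(candidates)
            if dist <= tolerance:
                matched.add(i)
                tp += 1
    fp = len(predicted) - tp              # predictions with no nearby reference break
    fn = len(reference) - len(matched)    # reference breaks that were never matched
    return tp, fp, fn

print(match_breaks([2015.5, 2017.0], [2015.2, 2016.9, 2018.4]))  # -> (2, 0, 1)
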
“…The model's effectiveness was measured using a confusion matrix, accuracy (CA), precision, recall, and F1-score (F1) [33]. The confusion matrix depicts the present state of the dataset as well as the number of accurate and wrong model predictions [34]. The proportion of accurate predictions to all predictions is measured by accuracy, which is a crucial and intuitive metric.…”
Section: Performance Evaluation of the Model
Mentioning confidence: 99%
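
For reference, the metrics named in the excerpt can be computed directly from the four confusion-matrix cells; the sketch below is a generic illustration, not the cited model's evaluation code.

# Standard binary-classification metrics from confusion-matrix counts.
def classification_metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall    = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(classification_metrics(tp=40, fp=10, fn=5, tn=45))
# {'accuracy': 0.85, 'precision': 0.8, 'recall': 0.888..., 'f1': 0.842...}
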
“…But for hazard prediction, it is desirable that a monitor generates alerts before a hazard happens. So we adopt a modified version of standard classification metrics [71], proposed for sequential data [72], [73], [74], where a tolerance window before the start time of hazard (t_h) is used for calculation of the metrics (see Fig. 6).…”
Section: Metrics
Mentioning confidence: 99%
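
The tolerance-window idea can be sketched as follows; the window length, the single-hazard setup, and the scoring of early alerts as false positives are assumptions made for illustration, not details taken from the cited works.

# Hypothetical sketch: an alert counts as a true positive only if it fires inside
# the tolerance window [t_h - w, t_h] preceding the hazard start time t_h.
def score_alerts(alert_times, t_h, w):
    tp = sum(1 for t in alert_times if t_h - w <= t <= t_h)   # timely alerts
    fp = sum(1 for t in alert_times if t < t_h - w)           # premature alerts
    fn = 0 if tp > 0 else 1                                    # hazard missed entirely
    return tp, fp, fn

print(score_alerts(alert_times=[3.0, 9.2], t_h=10.0, w=2.0))  # -> (1, 1, 0)
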