2020 · DOI: 10.1007/s00521-020-04916-5
On time series representations for multi-label NILM

Cited by 32 publications (19 citation statements) · References 49 publications
“…The F1 score is used as a general evaluation metric for multi-classification problems and is the metric selected in most papers. Except for [26] and [27], which use the low-frequency UK-DALE and REDD datasets, all the others use high-frequency datasets. As seen from TABLE II, on the PLAID dataset the F1 score of the algorithm proposed in this paper is 3.88% higher than that of the ordinary CNN.…”
Section: Experiment Results
confidence: 99%
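The statement above uses the macro-averaged F1 score to compare multi-label NILM classifiers. As a minimal sketch (the appliance names, ground truth, and predictions below are hypothetical, not taken from any of the cited papers), the metric averages per-appliance binary F1 scores with equal weight:

```python
def f1_score(y_true, y_pred):
    """Binary F1 for one appliance's on/off sequence."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(true_by_appliance, pred_by_appliance):
    """Macro F1: unweighted mean of the per-appliance F1 scores."""
    scores = [f1_score(t, p) for t, p in zip(true_by_appliance, pred_by_appliance)]
    return sum(scores) / len(scores)

# Hypothetical on/off ground truth and predictions for two appliances
truth = [[1, 1, 0, 0], [0, 1, 1, 0]]
preds = [[1, 0, 0, 0], [0, 1, 1, 1]]
print(macro_f1(truth, preds))  # mean of 2/3 and 4/5, roughly 0.733
```

Because every appliance contributes equally regardless of how often it is on, macro averaging does not let frequently running appliances dominate the score.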
“…Furthermore, the authors compare their algorithm to other cutting-edge multi-label NILM approaches, such as classification based on extreme learning machines (ELM) [34], graph-based semi-supervised learning [35], and an approach based on deep dictionary learning and deep transform learning [36]. Nalmpantis and Vrakas [37] present a multi-label NILM method based on the Signal2Vec algorithm, which maps any time series into a vector space. A deep neural network (DNN) based multi-label NILM approach using active-power features at low sampling frequency is proposed in [23,24].…”
Section: Related Work
confidence: 99%
“…Moreover, our results cannot be directly compared with those presented in [49], as these were obtained on a private dataset under very different experimental settings, including a different performance metric. In [23,37], the F1 macro scores for TCNN and FCNN DNN-based multi-label classifiers are given; however, they use the UK-DALE dataset, making a direct comparison irrelevant.…”
Section: Complexity Analysis
confidence: 99%
“…Recognizing many appliances with one model has attracted the interest of many researchers as well. Multi-label approaches usually identify the on/off states of a predefined number of appliances [3,4]. This research focuses on the single-regression approach, aiming to develop a computationally efficient energy disaggregator.…”
Section: Introduction
confidence: 99%
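The multi-label formulation described above predicts one on/off state per appliance for each time window. A common way to build such targets from submetered power is simple thresholding; the sketch below assumes hypothetical appliance names and power thresholds, which are illustrative and not drawn from the cited works:

```python
# Hypothetical per-appliance power thresholds in watts (illustrative values)
THRESHOLDS = {"kettle": 2000.0, "fridge": 50.0, "tv": 20.0}

def on_off_labels(readings):
    """Map per-appliance power readings (watts) to a binary on/off label vector.

    Appliances are ordered alphabetically so every label vector has a
    fixed, predefined layout, as multi-label NILM models require.
    """
    return [int(readings[name] >= THRESHOLDS[name]) for name in sorted(THRESHOLDS)]

# Order: fridge, kettle, tv
print(on_off_labels({"kettle": 2150.0, "fridge": 10.0, "tv": 35.0}))  # → [0, 1, 1]
```

A classifier trained on such vectors outputs one binary decision per appliance from the aggregate signal alone, which is what distinguishes the multi-label setting from the single-appliance regression approach the last quote describes.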